Project M2: Audio Classification with Neural Networks¶

The goal of this project was to gain first experience with audio classification using neural networks and the libraries and techniques involved.

To avoid spending too much time on data gathering, preparation and cleansing, two popular benchmark datasets were used: 1) AudioMNIST (more of a beginner's dataset) and 2) Environmental Sound Classification (ESC-10) (a more advanced dataset).

In [1]:
import warnings
warnings.filterwarnings('ignore')

from pathlib import Path
import os
from tqdm import tqdm
import json

from IPython.display import Audio
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np

from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, classification_report
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Conv2D, Dropout, MaxPool2D, BatchNormalization
from tensorflow.keras.models import Sequential
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
import tensorflow_hub as hub

# Special libraries for audio processing
import librosa, librosa.display # Important library in audio processing
from audiomentations import Compose, AddGaussianNoise, TimeStretch, PitchShift, Shift, Gain # Interesting library for audio data augmentation
import soundfile as sf # Writing wav files

1. AudioMNIST¶

The first part of this notebook builds models based on the AudioMNIST dataset. The dataset was introduced by Becker et al. (2019) as an audio equivalent to the standard benchmark datasets commonly used in image classification tasks (e.g. MNIST, CIFAR).

Paper: https://arxiv.org/pdf/1807.03418.pdf
Original Dataset: https://github.com/soerenab/AudioMNIST

Brief description of the data:

  • 60 different speakers
  • Every speaker speaks the digits 0 to 9, each digit 50 times
  • 30'000 recordings in total
  • All audio files have a sampling rate of 48 kHz and are saved in 16-bit integer format
  • Audio file names follow the pattern "{speakerid}/{digit}_{speakerid}_{sampleid}.wav", e.g. speaker 1, digit 0, sample 0: "01/0_01_0.wav"
  • audioMNIST_meta.txt contains information on accent, age, gender, native speaker flag, origin and recording date
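The naming pattern above can be parsed with a small helper (`parse_audiomnist_name` is a hypothetical illustration, not part of the dataset tooling):

```python
from pathlib import Path

def parse_audiomnist_name(relative_path: str) -> dict:
    """Split an AudioMNIST relative path into its metadata parts."""
    name = Path(relative_path).stem           # e.g. "0_01_0"
    digit, speaker, sample = name.split('_')
    return {'digit': int(digit), 'speaker': speaker, 'sample': int(sample)}

print(parse_audiomnist_name('01/0_01_0.wav'))
# {'digit': 0, 'speaker': '01', 'sample': 0}
```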

1.1. EDA¶

In [2]:
source_path = Path('../AudioMNIST/data')
metadata_path = os.path.join('../AudioMNIST/audioMNIST_meta.txt')
In [5]:
# Load metadata
metadata = json.load(open(metadata_path))
metadata_df = pd.DataFrame.from_dict(metadata, orient='index')
metadata_df.head()
Out[5]:
accent age gender native speaker origin recordingdate recordingroom
01 german 30 male no Europe, Germany, Wuerzburg 17-06-22-11-04-28 Kino
02 German 25 male no Europe, Germany, Hamburg 17-06-26-17-57-29 Kino
03 German 31 male no Europe, Germany, Bremen 17-06-30-17-34-51 Kino
04 German 23 male no Europe, Germany, Helmstedt 17-06-30-18-09-14 Kino
05 German 25 male no Europe, Germany, Hameln 17-07-06-10-53-10 Kino
In [3]:
example_path = os.path.join(source_path, '01/0_01_0.wav')

signal, sr = librosa.load(example_path, sr=None) # sr=None; load file with original sampling rate
print('Sampling rate: ', sr)
duration = librosa.get_duration(signal, sr=sr)
print('Duration: ', duration)
Sampling rate:  48000
Duration:  0.7474375
In [4]:
signal.shape
Out[4]:
(35877,)
In [5]:
# Librosa gives normalized amplitude of input data, scipy wavefile does not: https://stackoverflow.com/questions/50062358/difference-between-load-of-librosa-and-read-of-scipy-io-wavfile

'''
The waveform is the time-domain representation of a signal.
It shows how the amplitude (loudness) of the sound wave changes over time.
Amplitude = 0 = silence
'''

librosa.display.waveshow(signal, sr=sr, alpha=0.4)
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.title('Waveform of Signal')
plt.show()
In [6]:
signal
Out[6]:
array([0.00045776, 0.00042725, 0.00045776, ..., 0.0005188 , 0.0005188 ,
       0.0005188 ], dtype=float32)
In [7]:
plt.plot(signal[15000:20000])
Out[7]:
[<matplotlib.lines.Line2D at 0x170d03987c0>]
In [8]:
Audio(signal, rate=sr)
Out[8]:
Your browser does not support the audio element.
In [11]:
# Check signal length distribution over all files
raw_data = []

for speaker in tqdm(os.listdir(source_path)):
          
    speaker_path = os.path.join(source_path, speaker)

    for file in os.listdir(speaker_path):

        file_path = os.path.join(speaker_path, file)
        digit = file[:1]
        repetition = int(file.split('.')[0].split('_')[2]) # split('.') avoids str.strip's character-set pitfall
        signal, sr = librosa.load(file_path, sr=None) # None = Use original SR
        
        raw_data.append({
            'speaker': speaker,
            'digit': digit,
            'repetition': repetition,
            'file': file,
            'signal': signal,
            'signal_length': len(signal),
            'org_sampling_rate': sr
        })
        
        
raw_data_df = pd.DataFrame(raw_data).sort_values(by=['speaker', 'digit', 'repetition'])
raw_data_df.head()
100%|██████████| 60/60 [00:17<00:00,  3.38it/s]
Out[11]:
speaker digit repetition file signal signal_length org_sampling_rate
0 01 0 0 0_01_0.wav [0.00045776367, 0.0004272461, 0.00045776367, 0... 35877 48000
1 01 0 1 0_01_1.wav [6.1035156e-05, 6.1035156e-05, 6.1035156e-05, ... 31356 48000
12 01 0 2 0_01_2.wav [-0.00088500977, -0.00088500977, -0.0008850097... 37103 48000
23 01 0 3 0_01_3.wav [9.1552734e-05, 6.1035156e-05, 9.1552734e-05, ... 39389 48000
34 01 0 4 0_01_4.wav [0.00076293945, 0.00076293945, 0.00076293945, ... 29544 48000
In [12]:
# All signals are shorter than 1 second, i.e. signal_length < org_sampling_rate
# All signals have org_sampling_rate of 48kHz
raw_data_df.describe()
Out[12]:
repetition signal_length org_sampling_rate
count 30000.00000 30000.000000 30000.0
mean 24.50000 30844.475533 48000.0
std 14.43111 5334.715104 0.0
min 0.00000 14073.000000 48000.0
25% 12.00000 26909.750000 48000.0
50% 24.50000 30336.000000 48000.0
75% 37.00000 34380.000000 48000.0
max 49.00000 47998.000000 48000.0
In [13]:
# Histogram of signal length -> we have to adjust all signals to same length later in preprocessing
raw_data_df['signal_length'].hist()
plt.title('Histogram Signal Length')
plt.show()

1.2. STFT, Spectrograms and MFCC¶

Instead of using the waveform representation of the audio signal, the spectrogram is used as an input feature for deep neural networks. Spectrograms are built on top of the Short Time Fourier Transform (STFT) and combine the time domain and frequency domain of an audio signal.

Illustration Fourier Transform

image.png

Source: https://towardsdatascience.com/understanding-audio-data-fourier-transform-fft-spectrogram-and-speech-recognition-a4072d228520

Illustration STFT

image.png

Source: https://towardsdatascience.com/audio-deep-learning-made-simple-part-3-data-preparation-and-augmentation-24c6e1f6b52
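The idea in the illustrations can be sketched in a few lines: a hand-rolled STFT that slides a window along the signal and applies an FFT per frame. This is a simplified sketch (no centering or edge padding, unlike `librosa.stft`); `stft_magnitude` is a hypothetical helper.

```python
import numpy as np

def stft_magnitude(signal, n_fft=512, hop_length=128):
    """Naive STFT: slide a Hann window along the signal and FFT each frame."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop_length
    frames = np.stack([signal[i * hop_length : i * hop_length + n_fft] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-redundant half of the symmetric spectrum
    return np.abs(np.fft.rfft(frames, axis=1)).T   # shape: (1 + n_fft//2, n_frames)

# 1 second of a 440 Hz sine at sr = 8000
sig = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)
print(stft_magnitude(sig).shape)   # (257, 59)
```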

In [13]:
# Plot of signal in frequency dimension with STFT
# See https://www.tutorialexample.com/understand-n_fft-hop_length-win_length-in-audio-processing-librosa-tutorial/
# FFT alone is not sufficient: it represents the whole signal in the frequency domain and loses all time information, hence the need for the STFT

# In this example we reduce the signal to a single window and perform the STFT with no overlap
n_fft = 512 # Frame size recommended for speech
ft = np.abs(librosa.stft(signal[:n_fft], hop_length=n_fft+1)) # No overlapping in this plot
plt.plot(ft)
plt.title('Spectrum')
plt.xlabel('Frequency Bin')
plt.ylabel('Amplitude')
Out[13]:
Text(0, 0.5, 'Amplitude')

Spectrogram

A spectrogram brings together the frequency domain (as frequency bins on the y-axis) from the STFT and the time domain from the waveform (on the x-axis). As a third dimension it shows the energy of the signal in decibels (dB).

In [14]:
# Example Normal Spectrogram
signal, sr = librosa.load(example_path, sr=8000) 

stft = np.abs(librosa.stft(signal, n_fft=512, hop_length=35)) # Magnitude spectrogram (absolute value of the complex STFT)
stft_db = librosa.amplitude_to_db(stft) # Note: This maps amplitude to the logarithmic decibel scale, librosa maps the highest amplitude to db 0, see: https://stackoverflow.com/questions/63347977/what-is-the-conceptual-purpose-of-librosa-amplitude-to-db

librosa.display.specshow(stft_db, y_axis='linear', x_axis='time', sr=sr, hop_length=35) 
plt.title('Example Spectrogram')
plt.colorbar(format='%+2.0f dB')
plt.show()

print('Shape Signal: ', signal.shape)
print('Shape STFT: ', stft_db.shape)
Shape Signal:  (5980,)
Shape STFT:  (257, 171)

Notes Spectrogram and STFT:

  • Higher n_fft = more frequency bins, i.e. higher frequency resolution
  • Smaller hop_length = higher time resolution
  • See: https://music-classification.github.io/tutorial/part2_basics/input-representations.html
  • Shape is (1 + n_fft / 2, number of frames): n_fft defines how the frequency space is divided, i.e. how many frequency bins are created; we only see half of the bins (plus one), since the FFT of a real signal is symmetric, see: https://ch.mathworks.com/matlabcentral/answers/322142-what-are-frequency-bins
  • The maximum frequency captured by the FFT is sr / 2 (the Nyquist frequency), see: https://www.nti-audio.com/en/support/know-how/fast-fourier-transform-fft
  • n_fft = 512 is recommended for speech (see the librosa documentation)
  • When hop_length = n_fft, frames do not overlap
  • amplitude_to_db applies a log; dB values are relative and can be negative
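The shape rule can be checked against the example output (257, 171) above with simple arithmetic (`stft_shape` is an illustrative helper; it assumes librosa's default `center=True` framing):

```python
# librosa STFT shape with center=True:
# rows = 1 + n_fft // 2, cols = 1 + len(signal) // hop_length
def stft_shape(n_samples, n_fft, hop_length):
    return (1 + n_fft // 2, 1 + n_samples // hop_length)

print(stft_shape(5980, 512, 35))   # (257, 171) -- matches the example spectrogram
print(stft_shape(8000, 512, 35))   # (257, 229) -- the padded signals used later
```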

Mel spectrogram

A Mel spectrogram uses the concept of the Mel scale to represent the signal in the frequency domain. The Mel scale takes into account that humans don't perceive frequency linearly, but rather on a logarithmic scale (i.e. the perceived distance between 500 Hz and 1000 Hz is not the same as between 10500 Hz and 11000 Hz).

For more details: https://medium.com/analytics-vidhya/understanding-the-mel-spectrogram-fca2afa2ce53
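The compression of high frequencies follows directly from the mel formula. The sketch below uses the HTK-style formula as an assumption; note that librosa's default is the slightly different 'slaney' variant:

```python
import numpy as np

def hz_to_mel(f):
    """HTK-style mel formula; librosa's default ('slaney') differs slightly."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

# Equal 500 Hz steps shrink on the mel axis as frequency grows:
print(hz_to_mel(1000) - hz_to_mel(500))     # roughly 392 mel
print(hz_to_mel(11000) - hz_to_mel(10500))  # roughly 49 mel
```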

In [15]:
# Example Mel spectrogram
mel = librosa.feature.melspectrogram(signal, sr=sr, n_fft=512, hop_length=35, n_mels=128)
mel_db = librosa.power_to_db(mel)

librosa.display.specshow(mel_db, sr=sr, x_axis='time', y_axis='mel', hop_length=35)
plt.title('Example Mel Spectrogram')
plt.colorbar(format='%+2.0f dB')
plt.show()

print('Shape: ', mel_db.shape)
Shape:  (128, 171)

Notes Mel Spectrogram:

  • power_to_db instead of amplitude_to_db
  • Default n_mels in librosa is 128
  • Shape is (n_mels, number of frames)
  • More mel bins = higher frequency resolution

Mel frequency cepstral coefficients (MFCCs)

A very popular feature representation for audio classification are Mel frequency cepstral coefficients (MFCCs): a small set of coefficients (usually 10-20) extracted from the Mel spectrogram. They model the characteristics of the human voice and are well suited for speech recognition.

"Though the argumentation for the MFCCs is not without problems, it has become the most used feature in speech and audio recognition applications. It is used because it works and because it has relatively low complexity and it is straightforward to implement. Simply stated, if you're unsure which inputs to give to a speech and audio recognition engine, try first the MFCCs." (https://wiki.aalto.fi/display/ITSP/Cepstrum+and+MFCC)

Details, including some math, can be found in the link above.

In [188]:
# Example MFCC
mfcc = librosa.feature.mfcc(signal, sr=sr, n_fft=512, hop_length=35, n_mfcc=14)
librosa.display.specshow(mfcc, sr=sr, hop_length=35, x_axis='time')

plt.title('Example MFCC')
plt.xlabel('Time')
plt.ylabel('MFCC')
plt.colorbar()
plt.show()
In [17]:
# Create spectrogram plots for example files for digit 0 to 9
fig, axs = plt.subplots(nrows=5, ncols=2, figsize=(10, 20))

digits = [i for i in range(10)] # Digit 0 to 9

# loop through digits and axes
for digit, ax in zip(digits, axs.ravel()):
    
    # Load file from speaker 1
    rep = np.random.randint(0,49) # Random repetition    
    file = f'{digit}_01_{rep}.wav'
    example_path = os.path.join(source_path, f'01/{file}')
    signal, sr = librosa.load(example_path, sr=8000) 
    
    # Create spectrogram
    stft = np.abs(librosa.stft(signal, n_fft=512, hop_length=35))
    stft_db = librosa.amplitude_to_db(stft)
    
    # Display spectrogram
    img = librosa.display.specshow(stft_db, y_axis='linear', x_axis='time', ax=ax, sr=sr, hop_length=35)
    
    # Format plot
    ax.set_title(f'Digit: {digit}, File: {file}')
    fig.colorbar(img, ax=ax, format='%+2.0f dB')

fig.tight_layout()
plt.show()
In [18]:
# Create Mel spectrogram plots for example files for digit 0 to 9
fig, axs = plt.subplots(nrows=5, ncols=2, figsize=(10, 20))

digits = [i for i in range(10)] # Digit 0 to 9

# loop through digits and axes
for digit, ax in zip(digits, axs.ravel()):
    
    # Load file from speaker 1
    rep = np.random.randint(0,49) # Random repetition    
    file = f'{digit}_01_{rep}.wav'
    example_path = os.path.join(source_path, f'01/{file}')
    signal, sr = librosa.load(example_path, sr=8000) 

    # Create Mel spectrogram
    mel = librosa.feature.melspectrogram(signal, sr=sr, n_fft=512, hop_length=35)
    mel_db = librosa.power_to_db(mel)
    
    # Display spectrogram
    img = librosa.display.specshow(mel_db, x_axis='time', y_axis='mel', ax=ax, sr=sr, hop_length=35)
    
    # Format plot
    ax.set_title(f'Digit: {digit}, File: {file}')
    fig.colorbar(img, ax=ax, format='%+2.0f dB')

fig.tight_layout()
plt.show()
In [19]:
# Create MFCC plots for example files for digit 0 to 9
fig, axs = plt.subplots(nrows=5, ncols=2, figsize=(10, 20))

digits = [i for i in range(10)] # Digit 0 to 9

# loop through digits and axes
for digit, ax in zip(digits, axs.ravel()):
    
    # Load file from speaker 1
    rep = np.random.randint(0,49) # Random repetition    
    file = f'{digit}_01_{rep}.wav'
    example_path = os.path.join(source_path, f'01/{file}')
    signal, sr = librosa.load(example_path, sr=8000) 

    # Create MFCC
    mfcc = librosa.feature.mfcc(signal, sr=sr, n_fft=512, hop_length=35, n_mfcc=14)
    
    # Display MFCC
    img = librosa.display.specshow(mfcc, sr=sr, hop_length=35, ax=ax, x_axis='time')
    
    # Format plot
    ax.set_title(f'Digit: {digit}, File: {file}')
    fig.colorbar(img, ax=ax)

fig.tight_layout()
plt.show()

Spectrogram vs. Spectrogram

There is no single way to extract a spectrogram (or MFCCs) from a signal. Instead, many parameters can be varied (e.g. sampling rate, number of frequency bins, hop length), resulting in very different spectrogram representations of the same signal.

In [190]:
# Show how differently spectrograms of the same signal can look
setups = [
    {
        'setup': 'Default', 
        'sr': 48000, 
        'n_fft': 2048, 
        'hop_length': 512, 
        'win_length': 2048, 
        'scale': 'linear'
    }, 
    {
        'setup': 'Lower Sample Rate / Log', 
        'sr': 8000, 
        'n_fft': 2048, 
        'hop_length': 512, 
        'win_length': 2048, 
        'scale': 'log'
    }, 
    {
        'setup': 'Low SR / 512 n_fft / Log', 
        'sr': 8000, 
        'n_fft': 512, 
        'hop_length': 35, 
        'win_length': 512, 
        'scale': 'log'
    }, 
    {
        'setup': 'Extreme', 
        'sr': 48000, 
        'n_fft': 20, 
        'hop_length': 35, 
        'win_length': 10, 
        'scale': 'linear'
    }
]

# Create spectrogram plots for example files for digit 0 to 9
file = '0_01_0.wav'
example_path = os.path.join(source_path, f'01/{file}')

fig, axs = plt.subplots(nrows=2, ncols=2, figsize=(12, 9))

# loop through setups and axes
for setup, ax in zip(setups, axs.ravel()):
    
    setup_name = setup['setup']
    
    # Load example file
    signal, sr = librosa.load(example_path, sr=setup['sr']) 
    
    # Create spectrogram
    stft = np.abs(librosa.stft(signal, n_fft=setup['n_fft'], hop_length=setup['hop_length']))
    stft_db = librosa.amplitude_to_db(stft)
    
    # Display spectrogram
    img = librosa.display.specshow(stft_db, y_axis=setup['scale'], x_axis='time', ax=ax, sr=setup['sr'], hop_length=setup['hop_length'])
    
    # Format plot
    ax.set_title(f'Setup: {setup_name}, File: {file}')
    fig.colorbar(img, ax=ax, format='%+2.0f dB')

fig.tight_layout()
plt.show()

1.3. Preprocessing¶

  • Reduce the number of data points by resampling to 8000 Hz
  • Bring all files to the same length by zero-padding both ends
  • Create spectrograms with Short Time Fourier Transform (STFT)
  • Create Mel Spectrograms
  • Create Mel-Frequency Cepstral Coefficients (MFCC)
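The padding step can be wrapped in a small helper (`pad_to_length` is a hypothetical name; it mirrors the np.pad logic used in the preprocessing cells):

```python
import numpy as np

def pad_to_length(signal, target_len=8000):
    """Zero-pad a signal symmetrically to target_len samples.
    Assumes len(signal) <= target_len (true here: all signals are < 1 s at 8 kHz)."""
    n_zeros = target_len - len(signal)
    front = n_zeros // 2
    return np.pad(signal, (front, n_zeros - front))

print(pad_to_length(np.ones(5980)).shape)  # (8000,)
```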
In [16]:
n_fft = 512
hop_length = 35
n_mfcc = 14
In [17]:
# Load with sr = 8000
signal, sr = librosa.load(example_path, sr=8000)
signal.shape
Out[17]:
(5980,)
In [193]:
# Padding signal front/back with zeros to 8000
n_zeros = sr - len(signal)
n_zeros_front = int(n_zeros*.5)
n_zeros_back = n_zeros - n_zeros_front
signal_padded = np.pad(signal, (n_zeros_front, n_zeros_back))
signal_padded.shape
Out[193]:
(8000,)
In [194]:
# Perform stft and convert to db
stft = np.abs(librosa.stft(signal_padded, n_fft=n_fft, hop_length=hop_length))
stft_db = librosa.amplitude_to_db(stft, ref=np.max)
stft_db.shape
Out[194]:
(257, 229)
In [195]:
# Mel Spectrogram
mel = librosa.feature.melspectrogram(signal_padded, sr=sr, n_fft=n_fft, hop_length=hop_length)
mel_db = librosa.power_to_db(mel)
mel_db.shape
Out[195]:
(128, 229)
In [196]:
# MFCC
mfcc = librosa.feature.mfcc(signal_padded, n_fft=n_fft, n_mfcc=n_mfcc, hop_length=hop_length)
mfcc.shape
Out[196]:
(14, 229)
In [197]:
# Loop through speaker folders and create spectrogram data... we don't need to draw and save images:
# the input to the neural network is not a rendered image, but the raw spectrogram data
# see https://towardsdatascience.com/audio-deep-learning-made-simple-sound-classification-step-by-step-cebc936bbe5
preprocessed_data = []

for speaker in tqdm(os.listdir(source_path)):
          
    speaker_path = os.path.join(source_path, speaker)

    for file in os.listdir(speaker_path):
        
        # Load file
        file_path = os.path.join(speaker_path, file)
        signal, sr = librosa.load(file_path, sr=8000) # Resample to 8000 Hz
        
        # Padding signal
        n_zeros = sr - len(signal)
        n_zeros_front = int(n_zeros*.5)
        n_zeros_back = n_zeros - n_zeros_front
        signal_padded = np.pad(signal, (n_zeros_front, n_zeros_back))
        
        # Create STFT spectrogram and convert to db scale
        stft = np.abs(librosa.stft(signal_padded, n_fft=n_fft, hop_length=hop_length))
        stft_db = librosa.amplitude_to_db(stft, ref=np.max)
        stft_db = stft_db.reshape(1, stft_db.shape[0], stft_db.shape[1])
        
        # Create Mel Spectrogram and convert to db scale
        mel = librosa.feature.melspectrogram(signal_padded, sr=sr, n_fft=n_fft, hop_length=hop_length)
        mel_db = librosa.power_to_db(mel)
        mel_db = mel_db.reshape(1, mel_db.shape[0], mel_db.shape[1])
        
        # Extract MFCCs
        mfcc = librosa.feature.mfcc(signal_padded, sr=sr, n_fft=n_fft, n_mfcc=n_mfcc, hop_length=hop_length)
        mfcc = mfcc.reshape(1, mfcc.shape[0], mfcc.shape[1])
         
        # Get label and repetition
        digit = file[:1]
        repetition = int(file.split('.')[0].split('_')[2]) # split('.') avoids str.strip's character-set pitfall
        
        preprocessed_data.append({
            'speaker': speaker,
            'digit': digit,
            'repetition': repetition,
            'file': file,
            'signal_padded': signal_padded,
            'stft_db': stft_db,
            'mel_db': mel_db,
            'mfcc': mfcc
        })
        
preprocessed_data_df = pd.DataFrame(preprocessed_data)
preprocessed_data_df.head()
100%|██████████| 60/60 [29:19<00:00, 29.33s/it]
Out[197]:
speaker digit repetition file signal_padded stft_db mel_db mfcc
0 01 0 0 0_01_0.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-92.53915, -92.53915, -92.53915, -92.53915,... [[[-1046.9609, -1046.9609, -1046.9609, -1046.9...
1 01 0 1 0_01_1.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-97.5652, -97.5652, -97.5652, -97.5652, -97... [[[-1103.8242, -1103.8242, -1103.8242, -1103.8...
2 01 0 10 0_01_10.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-88.62083, -88.62083, -88.62083, -88.62083,... [[[-1002.6302, -1002.6302, -1002.6302, -1002.6...
3 01 0 11 0_01_11.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-90.27966, -90.27966, -90.27966, -90.27966,... [[[-1021.39777, -1021.39777, -1021.39777, -102...
4 01 0 12 0_01_12.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-93.71318, -93.71318, -93.71318, -93.71318,... [[[-1060.2435, -1060.2435, -1060.2435, -1060.2...
In [198]:
# Save data
preprocessed_data_df.to_pickle('preprocessed_data_audiomnist.pkl')

1.4. Spectrogram model¶

In [26]:
# Load preprocessed data
preprocessed_data_df = pd.read_pickle('preprocessed_data_audiomnist.pkl')
preprocessed_data_df.head()
Out[26]:
speaker digit repetition file signal_padded stft_db mel_db mfcc
0 01 0 0 0_01_0.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-92.53915, -92.53915, -92.53915, -92.53915,... [[[-1046.9609, -1046.9609, -1046.9609, -1046.9...
1 01 0 1 0_01_1.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-97.5652, -97.5652, -97.5652, -97.5652, -97... [[[-1103.8242, -1103.8242, -1103.8242, -1103.8...
2 01 0 10 0_01_10.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-88.62083, -88.62083, -88.62083, -88.62083,... [[[-1002.6302, -1002.6302, -1002.6302, -1002.6...
3 01 0 11 0_01_11.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-90.27966, -90.27966, -90.27966, -90.27966,... [[[-1021.39777, -1021.39777, -1021.39777, -102...
4 01 0 12 0_01_12.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-93.71318, -93.71318, -93.71318, -93.71318,... [[[-1060.2435, -1060.2435, -1060.2435, -1060.2...
In [27]:
# Spectrogram feature
X = preprocessed_data_df['stft_db'].values
X = np.concatenate(X, axis=0)
X.shape
Out[27]:
(30000, 257, 229)
In [28]:
n_recs = X.shape[0]
n_rows = X.shape[1]
n_cols = X.shape[2]
In [30]:
X = X.reshape(n_recs, n_rows*n_cols) # Flatten for StandardScaling
X.shape
Out[30]:
(30000, 58853)
In [31]:
# Target variable
y = preprocessed_data_df['digit'].values

# To categorical - for softmax multiclass in keras
y = to_categorical(y, num_classes=10)
y.shape
Out[31]:
(30000, 10)
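`to_categorical` simply one-hot encodes the integer labels for the softmax output; a minimal numpy equivalent as a sketch:

```python
import numpy as np

# Index into an identity matrix to one-hot encode integer labels
labels = np.array([0, 3, 9])
one_hot = np.eye(10)[labels]
print(one_hot.shape)   # (3, 10)
print(one_hot[1])      # all zeros except a 1.0 at index 3
```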
In [32]:
# Filenames
files = preprocessed_data_df['file'].values
files.shape
Out[32]:
(30000,)
In [33]:
# Train/test split with stratify on y (we want all digits being evenly represented in train and test)
X_train, X_test, y_train, y_test, files_train, files_test = train_test_split(X, y, files, test_size=0.2, stratify=y)
print('X_train: ', X_train.shape)
print('y_train: ', y_train.shape)
print('files_train: ', files_train.shape)
print('X_test: ', X_test.shape)
print('y_test: ', y_test.shape)
print('files_test: ', files_test.shape)
X_train:  (24000, 58853)
y_train:  (24000, 10)
files_train:  (24000,)
X_test:  (6000, 58853)
y_test:  (6000, 10)
files_test:  (6000,)
In [11]:
# Sanity check: the split shuffled files across digit, speaker and repetition
files_train[:20]
Out[11]:
array(['5_47_0.wav', '3_24_44.wav', '8_57_42.wav', '4_50_45.wav',
       '9_01_33.wav', '8_14_38.wav', '4_39_18.wav', '9_38_42.wav',
       '3_15_37.wav', '8_16_43.wav', '7_15_33.wav', '5_19_27.wav',
       '9_27_8.wav', '6_38_32.wav', '5_58_5.wav', '2_33_14.wav',
       '8_55_12.wav', '6_25_26.wav', '2_43_36.wav', '1_05_13.wav'],
      dtype=object)
In [12]:
# Same sanity check for the test set
files_test[:20]
Out[12]:
array(['0_45_32.wav', '4_44_30.wav', '8_11_22.wav', '7_17_13.wav',
       '3_42_1.wav', '8_42_47.wav', '7_40_33.wav', '9_37_39.wav',
       '8_31_31.wav', '5_20_8.wav', '5_10_32.wav', '0_22_37.wav',
       '3_42_33.wav', '3_43_36.wav', '5_55_19.wav', '1_36_32.wav',
       '6_21_0.wav', '5_50_4.wav', '6_32_43.wav', '0_27_39.wav'],
      dtype=object)
In [34]:
# Fit StandardScaler on training data
scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)
X_test_std = scaler.transform(X_test)
In [35]:
X_train_std = X_train_std.reshape(X_train.shape[0], n_rows, n_cols, 1) # Reshape to 4D - needed for CNN
X_test_std = X_test_std.reshape(X_test.shape[0], n_rows, n_cols, 1)

X_train_std.shape, X_test_std.shape
Out[35]:
((24000, 257, 229, 1), (6000, 257, 229, 1))
In [39]:
# Build CNN
model = Sequential([
    Conv2D(filters=32, kernel_size=10, strides=2, padding='same', activation='relu', input_shape=(n_rows, n_cols, 1)),
    MaxPool2D(pool_size=2, strides=2, padding='same'),
    Conv2D(filters=16, kernel_size=5, strides=2, padding='same', activation='relu'),
    MaxPool2D(pool_size=2, strides=2, padding='same'),
    Flatten(),
    #Dropout(0.5),
    Dense(units=30, activation='relu'),
    Dense(units=10, activation='softmax')
])

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 129, 115, 32)      3232      
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 65, 58, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 33, 29, 16)        12816     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 17, 15, 16)        0         
_________________________________________________________________
flatten (Flatten)            (None, 4080)              0         
_________________________________________________________________
dense (Dense)                (None, 30)                122430    
_________________________________________________________________
dense_1 (Dense)              (None, 10)                310       
=================================================================
Total params: 138,788
Trainable params: 138,788
Non-trainable params: 0
_________________________________________________________________
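The output sizes in the summary follow from the `padding='same'` rule out = ceil(in / stride), applied once per strided layer; a quick arithmetic sketch (`same_out` is just an illustrative helper):

```python
import math

def same_out(size, stride):
    """Output size of a 'same'-padded Conv2D/MaxPool2D layer."""
    return math.ceil(size / stride)

h, w = 257, 229
h, w = same_out(h, 2), same_out(w, 2)   # Conv2D, stride 2  -> (129, 115)
h, w = same_out(h, 2), same_out(w, 2)   # MaxPool, stride 2 -> (65, 58)
h, w = same_out(h, 2), same_out(w, 2)   # Conv2D, stride 2  -> (33, 29)
h, w = same_out(h, 2), same_out(w, 2)   # MaxPool, stride 2 -> (17, 15)
print(h * w * 16)  # 4080 flattened units, matching the summary
```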
In [40]:
# Model checkpoint to save best model
checkpoint_path = 'models/best_model_spec'
checkpoint = ModelCheckpoint(checkpoint_path, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
In [41]:
# Compile the model with adam optimizer and default settings
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
In [42]:
# Fit
history = model.fit(X_train_std, y_train, epochs=5, validation_split=0.2, batch_size=128, callbacks=[checkpoint])
Epoch 1/5
150/150 [==============================] - 167s 1s/step - loss: 0.6787 - accuracy: 0.7867 - val_loss: 0.1199 - val_accuracy: 0.9679

Epoch 00001: val_accuracy improved from -inf to 0.96792, saving model to models\best_model_spec
INFO:tensorflow:Assets written to: models\best_model_spec\assets
Epoch 2/5
150/150 [==============================] - 169s 1s/step - loss: 0.0840 - accuracy: 0.9756 - val_loss: 0.0507 - val_accuracy: 0.9854

Epoch 00002: val_accuracy improved from 0.96792 to 0.98542, saving model to models\best_model_spec
INFO:tensorflow:Assets written to: models\best_model_spec\assets
Epoch 3/5
150/150 [==============================] - 173s 1s/step - loss: 0.0429 - accuracy: 0.9889 - val_loss: 0.0441 - val_accuracy: 0.9892

Epoch 00003: val_accuracy improved from 0.98542 to 0.98917, saving model to models\best_model_spec
INFO:tensorflow:Assets written to: models\best_model_spec\assets
Epoch 4/5
150/150 [==============================] - 166s 1s/step - loss: 0.0253 - accuracy: 0.9924 - val_loss: 0.0440 - val_accuracy: 0.9877

Epoch 00004: val_accuracy did not improve from 0.98917
Epoch 5/5
150/150 [==============================] - 163s 1s/step - loss: 0.0169 - accuracy: 0.9955 - val_loss: 0.0328 - val_accuracy: 0.9900

Epoch 00005: val_accuracy improved from 0.98917 to 0.99000, saving model to models\best_model_spec
INFO:tensorflow:Assets written to: models\best_model_spec\assets
In [43]:
# Plot training history
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))

# Plot training accuracy history
axs[0].plot(history.history['accuracy'])
axs[0].plot(history.history['val_accuracy'])
axs[0].set_title('model accuracy')
axs[0].set_ylabel('accuracy')
axs[0].set_xlabel('epoch')
axs[0].set_ylim(0,1)
axs[0].legend(['train', 'val'], loc='lower right')

axs[1].plot(history.history['loss'])
axs[1].plot(history.history['val_loss'])
axs[1].set_title('model loss')
axs[1].set_ylabel('loss')
axs[1].set_xlabel('epoch')
axs[1].legend(['train', 'val'], loc='upper right')

fig.show()
In [44]:
test_loss, test_accuracy = model.evaluate(X_test_std, y_test)

print(f'Test loss: {test_loss}')
print(f'Test accuracy: {test_accuracy}')
188/188 [==============================] - 10s 54ms/step - loss: 0.0375 - accuracy: 0.9903
Test loss: 0.03752470761537552
Test accuracy: 0.9903333187103271
In [45]:
y_pred_proba = model.predict(X_test_std)

y_pred_test = np.array([np.argmax(y) for y in y_pred_proba])
y_pred_test
Out[45]:
array([4, 9, 0, ..., 7, 8, 1], dtype=int64)
In [46]:
y_true_test = np.array([np.argmax(y) for y in y_test])
y_true_test
Out[46]:
array([4, 9, 0, ..., 7, 8, 1], dtype=int64)
In [47]:
# Classification Report
print(classification_report(y_true_test, y_pred_test))
              precision    recall  f1-score   support

           0       0.99      0.99      0.99       600
           1       0.99      0.99      0.99       600
           2       0.98      0.99      0.99       600
           3       0.98      0.99      0.99       600
           4       0.99      0.99      0.99       600
           5       1.00      0.99      0.99       600
           6       0.99      0.99      0.99       600
           7       0.99      0.99      0.99       600
           8       0.99      0.99      0.99       600
           9       0.99      0.99      0.99       600

    accuracy                           0.99      6000
   macro avg       0.99      0.99      0.99      6000
weighted avg       0.99      0.99      0.99      6000

In [48]:
# Confusion matrix
ConfusionMatrixDisplay.from_predictions(y_true_test, y_pred_test)
plt.title('Confusion Matrix')
plt.show()
In [49]:
# Test on real data

# Load data
test_path = Path('testdata/Test_7_01_short.wav')

signal, sr = librosa.load(test_path, sr=None) # sr=None; load file with original sampling rate
print('Sampling rate: ', sr)
duration = librosa.get_duration(signal, sr=sr)
print('Duration: ', duration)
Sampling rate:  48000
Duration:  0.9771041666666667
In [50]:
Audio(signal, rate=sr)
Out[50]:
Your browser does not support the audio element.
In [51]:
# Preprocess
signal, sr = librosa.load(test_path, sr=8000) # Resample to 8000 Hz
        
# Padding signal
n_zeros = sr - len(signal)
n_zeros_front = int(n_zeros*.5)
n_zeros_back = n_zeros - n_zeros_front
signal_padded = np.pad(signal, (n_zeros_front, n_zeros_back))

# Perform STFT and convert to dB scale
stft = np.abs(librosa.stft(signal_padded, n_fft=512, hop_length=35))
stft_db = librosa.amplitude_to_db(stft, ref=np.max)
stft_db = stft_db.reshape(1, stft_db.shape[0], stft_db.shape[1])

stft_db.shape
Out[51]:
(1, 257, 229)
In [52]:
# Scaling
X_new = stft_db.reshape(1, n_rows * n_cols)
X_new_std = scaler.transform(X_new)
X_new_std = X_new_std.reshape(1, n_rows, n_cols, 1) # Reshape to 4D - needed for CNN
In [53]:
# Test
y_pred_new = model.predict(X_new_std)
np.argmax(y_pred_new)
Out[53]:
7
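The load → pad → feature-extract steps above are repeated for every test file. The center-padding logic in particular (split the missing samples between front and back) can be factored into a small helper for reuse across the STFT, mel, and MFCC pipelines. A minimal NumPy-only sketch; the function name `pad_center_to` is made up for illustration:

```python
import numpy as np

def pad_center_to(signal: np.ndarray, target_len: int) -> np.ndarray:
    """Zero-pad a 1-D signal to target_len samples, splitting the padding
    between front and back as in the notebook (the extra sample for odd
    padding lengths goes to the back)."""
    n_zeros = target_len - len(signal)
    if n_zeros < 0:
        raise ValueError('signal is longer than target length')
    n_front = n_zeros // 2
    return np.pad(signal, (n_front, n_zeros - n_front))

padded = pad_center_to(np.ones(5), 8)
print(padded.tolist())  # [0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0]
```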

1.5. Mel Spectrogram model¶

In [51]:
# Load preprocessed data
preprocessed_data_df = pd.read_pickle('preprocessed_data_audiomnist.pkl')
preprocessed_data_df.head()
Out[51]:
speaker digit repetition file signal_padded stft_db mel_db mfcc
0 01 0 0 0_01_0.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-92.53915, -92.53915, -92.53915, -92.53915,... [[[-1046.9609, -1046.9609, -1046.9609, -1046.9...
1 01 0 1 0_01_1.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-97.5652, -97.5652, -97.5652, -97.5652, -97... [[[-1103.8242, -1103.8242, -1103.8242, -1103.8...
2 01 0 10 0_01_10.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-88.62083, -88.62083, -88.62083, -88.62083,... [[[-1002.6302, -1002.6302, -1002.6302, -1002.6...
3 01 0 11 0_01_11.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-90.27966, -90.27966, -90.27966, -90.27966,... [[[-1021.39777, -1021.39777, -1021.39777, -102...
4 01 0 12 0_01_12.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-93.71318, -93.71318, -93.71318, -93.71318,... [[[-1060.2435, -1060.2435, -1060.2435, -1060.2...
In [102]:
# Mel spectrogram feature
X = preprocessed_data_df['mel_db'].values
X = np.concatenate(X, axis=0)
X.shape
Out[102]:
(30000, 128, 229)
In [103]:
n_recs = X.shape[0]
n_rows = X.shape[1]
n_cols = X.shape[2]
In [104]:
X = X.reshape(n_recs, n_rows*n_cols) # Flatten for StandardScaling
X.shape
Out[104]:
(30000, 29312)
In [105]:
# Target variable
y = preprocessed_data_df['digit'].values

# To categorical - for softmax multiclass in keras
y = to_categorical(y, num_classes=10)
y.shape
Out[105]:
(30000, 10)
In [106]:
# Filenames
files = preprocessed_data_df['file'].values
files.shape
Out[106]:
(30000,)
In [107]:
# Train/test split with stratify on y (we want all digits to be evenly represented in train and test)
X_train, X_test, y_train, y_test, files_train, files_test = train_test_split(X, y, files, test_size=0.2, stratify=y)
print('X_train: ', X_train.shape)
print('y_train: ', y_train.shape)
print('files_train: ', files_train.shape)
print('X_test: ', X_test.shape)
print('y_test: ', y_test.shape)
print('files_test: ', files_test.shape)
X_train:  (24000, 29312)
y_train:  (24000, 10)
files_train:  (24000,)
X_test:  (6000, 29312)
y_test:  (6000, 10)
files_test:  (6000,)
In [108]:
# Fit StandardScaler on training data
scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)
X_test_std = scaler.transform(X_test)
In [109]:
X_train_std = X_train_std.reshape(X_train.shape[0], n_rows, n_cols, 1) # Reshape to 4D - needed for CNN
X_test_std = X_test_std.reshape(X_test.shape[0], n_rows, n_cols, 1)

X_train_std.shape, X_test_std.shape
Out[109]:
((24000, 128, 229, 1), (6000, 128, 229, 1))
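The flatten → fit `StandardScaler` on train → transform both sets → reshape to 4-D pattern recurs for every feature type in this notebook. A minimal NumPy-only sketch of the same per-feature standardization (not the sklearn implementation used above; the helper name is ours):

```python
import numpy as np

def standardize_for_cnn(X_train: np.ndarray, X_test: np.ndarray):
    """Standardize each flattened feature using train-set statistics only,
    then reshape both sets to 4-D (N, H, W, 1) for a Conv2D input."""
    n_train, h, w = X_train.shape
    flat = X_train.reshape(n_train, -1)
    mean, std = flat.mean(axis=0), flat.std(axis=0)
    std[std == 0] = 1.0  # avoid division by zero on constant features
    scale = lambda X: ((X.reshape(len(X), -1) - mean) / std).reshape(len(X), h, w, 1)
    return scale(X_train), scale(X_test)

rng = np.random.default_rng(0)
X_tr, X_te = standardize_for_cnn(rng.normal(size=(10, 4, 5)), rng.normal(size=(3, 4, 5)))
print(X_tr.shape, X_te.shape)  # (10, 4, 5, 1) (3, 4, 5, 1)
```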
In [110]:
# Build simple CNN
model = Sequential([
    Conv2D(filters=32, kernel_size=10, strides=2, padding='same', activation='relu', input_shape=(n_rows, n_cols, 1)),
    MaxPool2D(pool_size=2, strides=2, padding='same'),
    Conv2D(filters=16, kernel_size=5, strides=2, padding='same', activation='relu'),
    MaxPool2D(pool_size=2, strides=2, padding='same'),
    Flatten(),
    Dense(units=30, activation='relu'),
    Dense(units=10, activation='softmax')
])

model.summary()
Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_8 (Conv2D)            (None, 64, 115, 32)       3232      
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 32, 58, 32)        0         
_________________________________________________________________
conv2d_9 (Conv2D)            (None, 16, 29, 16)        12816     
_________________________________________________________________
max_pooling2d_9 (MaxPooling2 (None, 8, 15, 16)         0         
_________________________________________________________________
flatten_4 (Flatten)          (None, 1920)              0         
_________________________________________________________________
dense_8 (Dense)              (None, 30)                57630     
_________________________________________________________________
dense_9 (Dense)              (None, 10)                310       
=================================================================
Total params: 73,988
Trainable params: 73,988
Non-trainable params: 0
_________________________________________________________________
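The spatial sizes in the summary follow directly from the `'same'`-padding rule `out = ceil(in / stride)` applied layer by layer; a quick sanity check of the (128, 229) mel input against the printed shapes:

```python
import math

def same_padding_out(n: int, stride: int) -> int:
    """Output length of one dimension for a 'same'-padded conv/pool layer."""
    return math.ceil(n / stride)

shape = (128, 229)  # mel spectrogram input (n_rows, n_cols)
for name, stride in [('conv2d', 2), ('max_pool', 2), ('conv2d', 2), ('max_pool', 2)]:
    shape = tuple(same_padding_out(d, stride) for d in shape)
    print(name, shape)
# conv2d (64, 115) -> max_pool (32, 58) -> conv2d (16, 29) -> max_pool (8, 15)
```

The final (8, 15) map with 16 filters flattens to 8 * 15 * 16 = 1920 units, matching the summary.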
In [111]:
# Model checkpoint to save best model
checkpoint_path = 'models/best_model_mel'
checkpoint = ModelCheckpoint(checkpoint_path, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
In [112]:
# Compile the model with adam optimizer and default settings
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
In [113]:
# Fit
history = model.fit(X_train_std, y_train, epochs=5, validation_split=0.2, batch_size=128, callbacks=[checkpoint])
Epoch 1/5
150/150 [==============================] - 160s 1s/step - loss: 0.8187 - accuracy: 0.7262 - val_loss: 0.0652 - val_accuracy: 0.9802

Epoch 00001: val_accuracy improved from -inf to 0.98021, saving model to models\best_model_mel
INFO:tensorflow:Assets written to: models\best_model_mel\assets
Epoch 2/5
150/150 [==============================] - 166s 1s/step - loss: 0.0697 - accuracy: 0.9809 - val_loss: 0.0585 - val_accuracy: 0.9812

Epoch 00002: val_accuracy improved from 0.98021 to 0.98125, saving model to models\best_model_mel
INFO:tensorflow:Assets written to: models\best_model_mel\assets
Epoch 3/5
150/150 [==============================] - 191s 1s/step - loss: 0.0475 - accuracy: 0.9846 - val_loss: 0.0258 - val_accuracy: 0.9937

Epoch 00003: val_accuracy improved from 0.98125 to 0.99375, saving model to models\best_model_mel
INFO:tensorflow:Assets written to: models\best_model_mel\assets
Epoch 4/5
150/150 [==============================] - 180s 1s/step - loss: 0.0296 - accuracy: 0.9913 - val_loss: 0.0337 - val_accuracy: 0.9890

Epoch 00004: val_accuracy did not improve from 0.99375
Epoch 5/5
150/150 [==============================] - 182s 1s/step - loss: 0.0211 - accuracy: 0.9940 - val_loss: 0.0256 - val_accuracy: 0.9933

Epoch 00005: val_accuracy did not improve from 0.99375
In [114]:
# Plot training history
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))

# Plot training accuracy history
axs[0].plot(history.history['accuracy'])
axs[0].plot(history.history['val_accuracy'])
axs[0].set_title('model accuracy')
axs[0].set_ylabel('accuracy')
axs[0].set_xlabel('epoch')
axs[0].set_ylim(0,1)
axs[0].legend(['train', 'val'], loc='lower right')

axs[1].plot(history.history['loss'])
axs[1].plot(history.history['val_loss'])
axs[1].set_title('model loss')
axs[1].set_ylabel('loss')
axs[1].set_xlabel('epoch')
axs[1].legend(['train', 'val'], loc='upper right')

fig.show()
In [115]:
test_loss, test_accuracy = model.evaluate(X_test_std, y_test)

print(f'Test loss: {test_loss}')
print(f'Test accuracy: {test_accuracy}')
188/188 [==============================] - 8s 42ms/step - loss: 0.0321 - accuracy: 0.9910
Test loss: 0.03214897960424423
Test accuracy: 0.9909999966621399
In [116]:
y_pred_proba = model.predict(X_test_std)

y_pred_test = np.array([np.argmax(y) for y in y_pred_proba])
y_pred_test
Out[116]:
array([2, 2, 1, ..., 0, 6, 8], dtype=int64)
In [117]:
y_true_test = np.array([np.argmax(y) for y in y_test])
y_true_test
Out[117]:
array([2, 2, 1, ..., 0, 6, 8], dtype=int64)
In [118]:
# Classification Report
print(classification_report(y_true_test, y_pred_test))
              precision    recall  f1-score   support

           0       1.00      0.99      0.99       600
           1       0.99      0.99      0.99       600
           2       0.99      1.00      1.00       600
           3       0.98      0.99      0.99       600
           4       0.99      1.00      0.99       600
           5       1.00      0.98      0.99       600
           6       0.99      0.99      0.99       600
           7       0.99      0.99      0.99       600
           8       0.99      0.99      0.99       600
           9       0.99      0.99      0.99       600

    accuracy                           0.99      6000
   macro avg       0.99      0.99      0.99      6000
weighted avg       0.99      0.99      0.99      6000

In [119]:
# Confusion matrix
ConfusionMatrixDisplay.from_predictions(y_true_test, y_pred_test)
plt.title('Confusion Matrix')
plt.show()
In [120]:
# Test on real data

# Load data
test_path = Path('testdata/Test_7_01_short.wav')

signal, sr = librosa.load(test_path, sr=None) # sr=None; load file with original sampling rate
print('Sampling rate: ', sr)
duration = librosa.get_duration(signal, sr=sr)
print('Duration: ', duration)
Sampling rate:  48000
Duration:  0.9771041666666667
In [121]:
# Preprocess
signal, sr = librosa.load(test_path, sr=8000) # Resample to the 8 kHz rate used during preprocessing
        
# Padding signal
n_zeros = sr - len(signal)
n_zeros_front = int(n_zeros*.5)
n_zeros_back = n_zeros - n_zeros_front
signal_padded = np.pad(signal, (n_zeros_front, n_zeros_back))

# Extract mel spectrogram and convert to db scale
mel = librosa.feature.melspectrogram(signal_padded, sr=sr, n_fft=512, hop_length=35)
mel_db = librosa.power_to_db(mel)
mel_db = mel_db.reshape(1, mel_db.shape[0], mel_db.shape[1])

mel_db.shape
Out[121]:
(1, 128, 229)
In [122]:
# Scaling
X_new = mel_db.reshape(1, n_rows * n_cols)
X_new_std = scaler.transform(X_new)
X_new_std = X_new_std.reshape(1, n_rows, n_cols, 1) # Reshape to 4D - needed for CNN
In [123]:
# Test
y_pred_new = model.predict(X_new_std)
np.argmax(y_pred_new)
Out[123]:
7

1.6. MFCC model¶

In [75]:
# Load preprocessed data
preprocessed_data_df = pd.read_pickle('preprocessed_data_audiomnist.pkl')
preprocessed_data_df.head()
Out[75]:
speaker digit repetition file signal_padded stft_db mel_db mfcc
0 01 0 0 0_01_0.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-92.53915, -92.53915, -92.53915, -92.53915,... [[[-1046.9609, -1046.9609, -1046.9609, -1046.9...
1 01 0 1 0_01_1.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-97.5652, -97.5652, -97.5652, -97.5652, -97... [[[-1103.8242, -1103.8242, -1103.8242, -1103.8...
2 01 0 10 0_01_10.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-88.62083, -88.62083, -88.62083, -88.62083,... [[[-1002.6302, -1002.6302, -1002.6302, -1002.6...
3 01 0 11 0_01_11.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-90.27966, -90.27966, -90.27966, -90.27966,... [[[-1021.39777, -1021.39777, -1021.39777, -102...
4 01 0 12 0_01_12.wav [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-80.0, -80.0, -80.0, -80.0, -80.0, -80.0, -... [[[-93.71318, -93.71318, -93.71318, -93.71318,... [[[-1060.2435, -1060.2435, -1060.2435, -1060.2...
In [30]:
# MFCC feature
X = preprocessed_data_df['mfcc'].values
X = np.concatenate(X, axis=0)
X.shape
Out[30]:
(30000, 14, 229)
In [31]:
n_recs = X.shape[0]
n_rows = X.shape[1]
n_cols = X.shape[2]
In [32]:
X = X.reshape(n_recs, n_rows*n_cols) # Flatten for StandardScaling
X.shape
Out[32]:
(30000, 3206)
In [33]:
# Target variable
y = preprocessed_data_df['digit'].values

# To categorical - for softmax multiclass in keras
y = to_categorical(y, num_classes=10)
y.shape
Out[33]:
(30000, 10)
In [34]:
# Filenames
files = preprocessed_data_df['file'].values
files.shape
Out[34]:
(30000,)
In [35]:
# Train/test split with stratify on y (we want all digits to be evenly represented in train and test)
X_train, X_test, y_train, y_test, files_train, files_test = train_test_split(X, y, files, test_size=0.2, stratify=y)
print('X_train: ', X_train.shape)
print('y_train: ', y_train.shape)
print('files_train: ', files_train.shape)
print('X_test: ', X_test.shape)
print('y_test: ', y_test.shape)
print('files_test: ', files_test.shape)
X_train:  (24000, 3206)
y_train:  (24000, 10)
files_train:  (24000,)
X_test:  (6000, 3206)
y_test:  (6000, 10)
files_test:  (6000,)
In [36]:
# Fit StandardScaler on training data
scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)
X_test_std = scaler.transform(X_test)
In [37]:
X_train_std = X_train_std.reshape(X_train.shape[0], n_rows, n_cols, 1) # Reshape to 4D - needed for CNN
X_test_std = X_test_std.reshape(X_test.shape[0], n_rows, n_cols, 1)

X_train_std.shape, X_test_std.shape
Out[37]:
((24000, 14, 229, 1), (6000, 14, 229, 1))
In [38]:
# Build simple CNN
model = Sequential([
    Conv2D(filters=32, kernel_size=10, strides=2, padding='same', activation='relu', input_shape=(n_rows, n_cols, 1)),
    MaxPool2D(pool_size=2, strides=2, padding='same'),
    Conv2D(filters=16, kernel_size=5, strides=2, padding='same', activation='relu'),
    MaxPool2D(pool_size=2, strides=2, padding='same'),
    Flatten(),
    #Dropout(0.5),
    Dense(units=30, activation='relu'),
    Dense(units=10, activation='softmax')
])

model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_2 (Conv2D)            (None, 7, 115, 32)        3232      
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 58, 32)         0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 2, 29, 16)         12816     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 1, 15, 16)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 240)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 30)                7230      
_________________________________________________________________
dense_3 (Dense)              (None, 10)                310       
=================================================================
Total params: 23,588
Trainable params: 23,588
Non-trainable params: 0
_________________________________________________________________
In [39]:
# Model checkpoint to save best model
checkpoint_path = 'models/best_model_mfcc'
checkpoint = ModelCheckpoint(checkpoint_path, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
In [40]:
# Compile the model with adam optimizer and default settings
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
In [41]:
# Fit
history = model.fit(X_train_std, y_train, epochs=5, validation_split=0.2, batch_size=128, callbacks=[checkpoint])
Epoch 1/5
150/150 [==============================] - 9s 57ms/step - loss: 1.2966 - accuracy: 0.5486 - val_loss: 0.1385 - val_accuracy: 0.9592

Epoch 00001: val_accuracy improved from -inf to 0.95917, saving model to models\best_model_mfcc
INFO:tensorflow:Assets written to: models\best_model_mfcc\assets
Epoch 2/5
150/150 [==============================] - 9s 60ms/step - loss: 0.1002 - accuracy: 0.9699 - val_loss: 0.0780 - val_accuracy: 0.9808

Epoch 00002: val_accuracy improved from 0.95917 to 0.98083, saving model to models\best_model_mfcc
INFO:tensorflow:Assets written to: models\best_model_mfcc\assets
Epoch 3/5
150/150 [==============================] - 9s 62ms/step - loss: 0.0588 - accuracy: 0.9830 - val_loss: 0.0485 - val_accuracy: 0.9858

Epoch 00003: val_accuracy improved from 0.98083 to 0.98583, saving model to models\best_model_mfcc
INFO:tensorflow:Assets written to: models\best_model_mfcc\assets
Epoch 4/5
150/150 [==============================] - 10s 64ms/step - loss: 0.0331 - accuracy: 0.9911 - val_loss: 0.0490 - val_accuracy: 0.9883

Epoch 00004: val_accuracy improved from 0.98583 to 0.98833, saving model to models\best_model_mfcc
INFO:tensorflow:Assets written to: models\best_model_mfcc\assets
Epoch 5/5
150/150 [==============================] - 9s 63ms/step - loss: 0.0298 - accuracy: 0.9926 - val_loss: 0.0438 - val_accuracy: 0.9865

Epoch 00005: val_accuracy did not improve from 0.98833
In [42]:
# Plot training history
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))

# Plot training accuracy history
axs[0].plot(history.history['accuracy'])
axs[0].plot(history.history['val_accuracy'])
axs[0].set_title('model accuracy')
axs[0].set_ylabel('accuracy')
axs[0].set_xlabel('epoch')
axs[0].set_ylim(0,1)
axs[0].legend(['train', 'val'], loc='lower right')

axs[1].plot(history.history['loss'])
axs[1].plot(history.history['val_loss'])
axs[1].set_title('model loss')
axs[1].set_ylabel('loss')
axs[1].set_xlabel('epoch')
axs[1].legend(['train', 'val'], loc='upper right')

fig.show()
In [43]:
test_loss, test_accuracy = model.evaluate(X_test_std, y_test)

print(f'Test loss: {test_loss}')
print(f'Test accuracy: {test_accuracy}')
188/188 [==============================] - 1s 4ms/step - loss: 0.0488 - accuracy: 0.9863
Test loss: 0.04877069965004921
Test accuracy: 0.9863333106040955
In [44]:
y_pred_proba = model.predict(X_test_std)

y_pred_test = np.array([np.argmax(y) for y in y_pred_proba])
y_pred_test
Out[44]:
array([0, 0, 8, ..., 0, 1, 5], dtype=int64)
In [45]:
y_true_test = np.array([np.argmax(y) for y in y_test])
y_true_test
Out[45]:
array([0, 0, 8, ..., 0, 1, 5], dtype=int64)
In [46]:
# Classification Report
print(classification_report(y_true_test, y_pred_test))
              precision    recall  f1-score   support

           0       0.99      0.99      0.99       600
           1       0.99      0.98      0.99       600
           2       0.99      0.98      0.99       600
           3       0.99      0.95      0.97       600
           4       0.98      1.00      0.99       600
           5       0.99      0.99      0.99       600
           6       0.99      0.99      0.99       600
           7       0.98      0.99      0.99       600
           8       0.96      0.99      0.97       600
           9       0.99      0.98      0.99       600

    accuracy                           0.99      6000
   macro avg       0.99      0.99      0.99      6000
weighted avg       0.99      0.99      0.99      6000

In [47]:
# Confusion matrix
ConfusionMatrixDisplay.from_predictions(y_true_test, y_pred_test)
plt.title('Confusion Matrix')
plt.show()
In [48]:
# Test on real data

# Load data
test_path = Path('testdata/Test_7_01_short.wav')

signal, sr = librosa.load(test_path, sr=None) # sr=None; load file with original sampling rate
print('Sampling rate: ', sr)
duration = librosa.get_duration(signal, sr=sr)
print('Duration: ', duration)
Sampling rate:  48000
Duration:  0.9771041666666667
In [49]:
# Preprocess
signal, sr = librosa.load(test_path, sr=8000) # Resample to the 8 kHz rate used during preprocessing
        
# Padding signal
n_zeros = sr - len(signal)
n_zeros_front = int(n_zeros*.5)
n_zeros_back = n_zeros - n_zeros_front
signal_padded = np.pad(signal, (n_zeros_front, n_zeros_back))

# Extract MFCC
mfcc = librosa.feature.mfcc(signal_padded, sr=sr, n_fft=512, n_mfcc=14, hop_length=35)
mfcc = mfcc.reshape(1, mfcc.shape[0], mfcc.shape[1])
mfcc.shape
Out[49]:
(1, 14, 229)
In [50]:
# Scaling
X_new = mfcc.reshape(1, n_rows * n_cols)
X_new_std = scaler.transform(X_new)
X_new_std = X_new_std.reshape(1, n_rows, n_cols, 1) # Reshape to 4D - needed for CNN
In [51]:
# Test
y_pred_new = model.predict(X_new_std)
np.argmax(y_pred_new)
Out[51]:
7

2. Environmental Sound Classification (ESC-10)¶

The second part of this notebook builds classification models for the ESC-10 dataset. ESC-10 is a subset of the ESC-50 dataset which was introduced by Piczak (2015).

Paper: https://www.karolpiczak.com/papers/Piczak2015-ESC-Dataset.pdf
Original Dataset: https://github.com/karolpiczak/ESC-50 or https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/YDEPUT
Leaderboard for ESC-50: https://github.com/karolpiczak/ESC-50 and https://paperswithcode.com/sota/audio-classification-on-esc-50
Metrics for ESC-10 models: https://www.researchgate.net/publication/344519283_Automatic_Environmental_Sound_Recognition_AESR_Using_Convolutional_Neural_Network

Brief description of data (from README):
The ESC-50 dataset is a labeled collection of 2000 environmental audio recordings suitable for benchmarking methods of environmental sound classification. The dataset consists of 5-second-long recordings organized into 50 semantic classes (with 40 examples per class), loosely arranged into 5 major categories. Recordings shorter than 5 seconds were padded with silence. The extracted samples were reconverted to a unified format (44.1 kHz, single channel, compression at 192 kbit/s).

Filename convention is '{fold}-{src_file}-{take}-{target}.wav'

fold: Prearranged folder number to perform 5-fold cross-validation (if needed)
src_file: Original source file id for 5 sec clip
take: Additional recording from same source file
target: Class to which the recording belongs
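The convention above can be parsed directly from the filename string; a small sketch (the helper name `parse_esc_filename` is ours, not part of the dataset tooling):

```python
def parse_esc_filename(filename: str) -> dict:
    """Split an ESC-50 filename '{fold}-{src_file}-{take}-{target}.wav'
    into its four components."""
    stem = filename.rsplit('.', 1)[0]
    fold, src_file, take, target = stem.split('-')
    return {'fold': int(fold), 'src_file': src_file,
            'take': take, 'target': int(target)}

print(parse_esc_filename('1-100032-A-0.wav'))
# {'fold': 1, 'src_file': '100032', 'take': 'A', 'target': 0}
```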

2.1. EDA¶

In [41]:
source_path = Path('../ESC-50-master/audio')
metadata_path = os.path.join('../ESC-50-master/meta/esc50.csv')
In [42]:
metadata_df = pd.read_csv(metadata_path)
metadata_df.head()
Out[42]:
filename fold target category esc10 src_file take
0 1-100032-A-0.wav 1 0 dog True 100032 A
1 1-100038-A-14.wav 1 14 chirping_birds False 100038 A
2 1-100210-A-36.wav 1 36 vacuum_cleaner False 100210 A
3 1-100210-B-36.wav 1 36 vacuum_cleaner False 100210 B
4 1-101296-A-19.wav 1 19 thunderstorm False 101296 A
In [56]:
# Number of records
metadata_df['filename'].count()
Out[56]:
2000
In [57]:
# Distribution per category
metadata_df.groupby('category')['filename'].count()
Out[57]:
category
airplane            40
breathing           40
brushing_teeth      40
can_opening         40
car_horn            40
cat                 40
chainsaw            40
chirping_birds      40
church_bells        40
clapping            40
clock_alarm         40
clock_tick          40
coughing            40
cow                 40
crackling_fire      40
crickets            40
crow                40
crying_baby         40
dog                 40
door_wood_creaks    40
door_wood_knock     40
drinking_sipping    40
engine              40
fireworks           40
footsteps           40
frog                40
glass_breaking      40
hand_saw            40
helicopter          40
hen                 40
insects             40
keyboard_typing     40
laughing            40
mouse_click         40
pig                 40
pouring_water       40
rain                40
rooster             40
sea_waves           40
sheep               40
siren               40
sneezing            40
snoring             40
thunderstorm        40
toilet_flush        40
train               40
vacuum_cleaner      40
washing_machine     40
water_drops         40
wind                40
Name: filename, dtype: int64
In [58]:
# ESC-10 categories
metadata_df[metadata_df['esc10']].groupby(['category', 'target'])['filename'].count().reset_index().sort_values(by='target')
Out[58]:
category target filename
4 dog 0 40
7 rooster 1 40
6 rain 10 40
8 sea_waves 11 40
2 crackling_fire 12 40
3 crying_baby 20 40
9 sneezing 21 40
1 clock_tick 38 40
5 helicopter 40 40
0 chainsaw 41 40
In [59]:
example_path = os.path.join(source_path, '1-21934-A-38.wav')

signal, sr = librosa.load(example_path, sr=None) # sr=None; load file with original sampling rate
print('Sampling rate: ', sr)
duration = librosa.get_duration(signal, sr=sr)
print('Duration: ', duration)
Sampling rate:  44100
Duration:  5.0
In [60]:
librosa.display.waveshow(signal, sr=sr, alpha=0.4)
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.title('Waveform of Signal')
plt.show()
In [61]:
Audio(signal, rate=sr)
Out[61]:
In [206]:
# Check signal length distribution over all files
raw_data = []

for file in tqdm(os.listdir(source_path)):
          
    file_path = os.path.join(source_path, file)
    signal, sr = librosa.load(file_path, sr=None) # None = Use original SR

    raw_data.append({
        'file': file,
        'signal': signal,
        'signal_length': len(signal),
        'org_sampling_rate': sr
    })
        
        
raw_data_df = pd.DataFrame(raw_data).sort_values(by=['file'])
raw_data_df.describe() # Verify that signal_length and sampling rate are the same across all files, as stated in the original dataset paper
100%|██████████| 2000/2000 [00:08<00:00, 228.43it/s]
Out[206]:
signal_length org_sampling_rate
count 2000.0 2000.0
mean 220500.0 44100.0
std 0.0 0.0
min 220500.0 44100.0
25% 220500.0 44100.0
50% 220500.0 44100.0
75% 220500.0 44100.0
max 220500.0 44100.0
In [43]:
# Get metadata and files for esc10 data
metadata_esc10_df = metadata_df[metadata_df['esc10']]
esc10_files = metadata_esc10_df['filename'].values
esc10_files[:10]
Out[43]:
array(['1-100032-A-0.wav', '1-110389-A-0.wav', '1-116765-A-41.wav',
       '1-17150-A-12.wav', '1-172649-A-40.wav', '1-172649-B-40.wav',
       '1-172649-C-40.wav', '1-172649-D-40.wav', '1-172649-E-40.wav',
       '1-172649-F-40.wav'], dtype=object)
In [44]:
# Get first record for each category
example_files = metadata_esc10_df.groupby('category').first().reset_index()[['category', 'filename']].values
example_files[:10]
Out[44]:
array([['chainsaw', '1-116765-A-41.wav'],
       ['clock_tick', '1-21934-A-38.wav'],
       ['crackling_fire', '1-17150-A-12.wav'],
       ['crying_baby', '1-187207-A-20.wav'],
       ['dog', '1-100032-A-0.wav'],
       ['helicopter', '1-172649-A-40.wav'],
       ['rain', '1-17367-A-10.wav'],
       ['rooster', '1-26806-A-1.wav'],
       ['sea_waves', '1-28135-A-11.wav'],
       ['sneezing', '1-26143-A-21.wav']], dtype=object)
In [49]:
# Create Mel spectrogram plots for one example per category
fig, axs = plt.subplots(nrows=5, ncols=2, figsize=(12, 22))

# Loop through categories and axes
for example_file, ax in zip(example_files, axs.ravel()):
    
    # Load file
    example_path = os.path.join(source_path, example_file[1])
    signal, sr = librosa.load(example_path, sr=None)

    # Create Mel spectrogram
    mel = librosa.feature.melspectrogram(signal, sr=sr, n_fft=2048, hop_length=512)
    mel_db = librosa.power_to_db(mel)
    
    # Display spectrogram
    img = librosa.display.specshow(mel_db, x_axis='time', y_axis='mel', ax=ax, sr=sr, hop_length=512, cmap='magma')
    
    # Format plot
    ax.set_title(f'Category: {example_file[0]}, File: {example_file[1]}')
    fig.colorbar(img, ax=ax, format='%+2.0f dB')

fig.tight_layout()
plt.show()
In [20]:
clock_files = metadata_esc10_df[metadata_esc10_df['category']=='clock_tick'][['category', 'filename']].values
clock_files[:10]
Out[20]:
array([['clock_tick', '1-21934-A-38.wav'],
       ['clock_tick', '1-21935-A-38.wav'],
       ['clock_tick', '1-35687-A-38.wav'],
       ['clock_tick', '1-42139-A-38.wav'],
       ['clock_tick', '1-48413-A-38.wav'],
       ['clock_tick', '1-57163-A-38.wav'],
       ['clock_tick', '1-62849-A-38.wav'],
       ['clock_tick', '1-62850-A-38.wav'],
       ['clock_tick', '2-119748-A-38.wav'],
       ['clock_tick', '2-127108-A-38.wav']], dtype=object)
In [21]:
# Create Mel spectrogram plots for all clock_tick recordings (40 samples)
fig, axs = plt.subplots(nrows=20, ncols=2, figsize=(12, 90))

# Loop through categories and axes
for example_file, ax in zip(clock_files, axs.ravel()):
    
    # Load file
    example_path = os.path.join(source_path, example_file[1])
    signal, sr = librosa.load(example_path, sr=None)

    # Create Mel spectrogram
    mel = librosa.feature.melspectrogram(signal, sr=sr, n_fft=2048, hop_length=512)
    mel_db = librosa.power_to_db(mel)
    
    # Display spectrogram
    img = librosa.display.specshow(mel_db, x_axis='time', y_axis='mel', ax=ax, sr=sr, hop_length=512, cmap='magma')
    
    # Format plot
    ax.set_title(f'Category: {example_file[0]}, File: {example_file[1]}')
    fig.colorbar(img, ax=ax, format='%+2.0f dB')

fig.tight_layout()
plt.show()
In [25]:
# Spectrogram for all dogs
dog_files = metadata_df[metadata_df['category']=='dog'][['category', 'filename']].values
dog_files[:10]
Out[25]:
array([['dog', '1-100032-A-0.wav'],
       ['dog', '1-110389-A-0.wav'],
       ['dog', '1-30226-A-0.wav'],
       ['dog', '1-30344-A-0.wav'],
       ['dog', '1-32318-A-0.wav'],
       ['dog', '1-59513-A-0.wav'],
       ['dog', '1-85362-A-0.wav'],
       ['dog', '1-97392-A-0.wav'],
       ['dog', '2-114280-A-0.wav'],
       ['dog', '2-114587-A-0.wav']], dtype=object)
In [26]:
# Example all dogs

# Create Mel spectrogram plots for all dogs (40 samples)
fig, axs = plt.subplots(nrows=20, ncols=2, figsize=(12, 90))

# Loop through categories and axes
for example_file, ax in zip(dog_files, axs.ravel()):
    
    # Load file
    example_path = os.path.join(source_path, example_file[1])
    signal, sr = librosa.load(example_path, sr=None)

    # Create Mel spectrogram
    mel = librosa.feature.melspectrogram(signal, sr=sr, n_fft=2048, hop_length=512)
    mel_db = librosa.power_to_db(mel)
    
    # Display spectrogram
    img = librosa.display.specshow(mel_db, x_axis='time', y_axis='mel', ax=ax, sr=sr, hop_length=512, cmap='magma')
    
    # Format plot
    ax.set_title(f'Category: {example_file[0]}, File: {example_file[1]}')
    fig.colorbar(img, ax=ax, format='%+2.0f dB')

fig.tight_layout()
plt.show()
In [62]:
# Example dogs
example_path = os.path.join(source_path, '1-59513-A-0.wav')
signal, sr = librosa.load(example_path, sr=None)

Audio(signal, rate=sr)
Out[62]:

2.2. Mel spectrogram model using CNN without data augmentation¶

Parameter grid search with on-the-fly mel spectrogram generation (i.e., spectrograms are computed as part of the training process)

In [36]:
n_fft = 2048 # Default of librosa
hop_length = 512 # Default of librosa, i.e. n_fft / 4
n_mels = 128 # Default of librosa
sr_all = 44100 # Original SR
num_classes = 10 # For ESC10

# Remap original labels for ESC10 data
label_map = {
    0: 0,  # dog
    1: 1,  # rooster
    10: 2, # rain
    11: 3, # sea_waves
    12: 4, # crackling_fire
    20: 5, # crying_baby
    21: 6, # sneezing
    38: 7, # clock_tick
    40: 8, # helicopter
    41: 9  # chainsaw
}

label_names = [
    'dog',
    'rooster',
    'rain',
    'sea_waves',
    'crackling_fire',
    'crying_baby',
    'sneezing',
    'clock_tick',
    'helicopter',
    'chainsaw'
]
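Combining the filename convention with `label_map` gives the dense ESC-10 label (and its name) for any file; a minimal sketch using the mappings defined above (the helper name `esc10_label` is ours):

```python
label_map = {0: 0, 1: 1, 10: 2, 11: 3, 12: 4, 20: 5, 21: 6, 38: 7, 40: 8, 41: 9}
label_names = ['dog', 'rooster', 'rain', 'sea_waves', 'crackling_fire',
               'crying_baby', 'sneezing', 'clock_tick', 'helicopter', 'chainsaw']

def esc10_label(filename: str) -> tuple:
    """Extract the original target from '{fold}-{src_file}-{take}-{target}.wav'
    and remap it to the dense ESC-10 label plus its category name."""
    target_org = int(filename.rsplit('-', 1)[1].split('.')[0])
    label = label_map[target_org]
    return label, label_names[label]

print(esc10_label('1-100032-A-0.wav'))   # (0, 'dog')
print(esc10_label('1-116765-A-41.wav'))  # (9, 'chainsaw')
```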
In [178]:
# Load file
example_path = os.path.join(source_path, '1-59513-A-0.wav')
signal, sr = librosa.load(example_path, sr=sr_all)

# Create Mel spectrogram
mel = librosa.feature.melspectrogram(signal, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
mel_db = librosa.power_to_db(mel)

print(mel_db.shape)
(128, 431)
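The 431 frames follow from librosa's default centered framing: with `center=True` the frame count is `1 + n_samples // hop_length`. The same formula explains the 229 frames of the AudioMNIST features (1 s at 8 kHz, hop length 35). A quick check of both:

```python
def n_frames(n_samples: int, hop_length: int) -> int:
    """Frame count of an STFT/mel spectrogram with librosa's default
    center=True padding."""
    return 1 + n_samples // hop_length

print(n_frames(5 * 44100, 512))  # 431 (ESC-10: 5 s at 44.1 kHz, hop 512)
print(n_frames(8000, 35))        # 229 (AudioMNIST: 1 s at 8 kHz, hop 35)
```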
In [56]:
# Create spectrograms for all ESC-10 files
preprocessed_data = []

for file in tqdm(esc10_files):
    
    # Load file
    file_path = os.path.join(source_path, file)
    signal, sr = librosa.load(file_path, sr=sr_all)

    # Create Mel Spectrogram and convert to db scale
    mel = librosa.feature.melspectrogram(signal, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel)
    mel_db = mel_db.reshape(1, mel_db.shape[0], mel_db.shape[1])
    
    # Extract class label from filename
    label_org = int(file.split('-')[-1].split('.')[0])
    
    # Get new label from label map
    label = label_map[label_org]

    preprocessed_data.append({
        'file': file,
        'label': label,
        'signal': signal,
        'mel_db': mel_db
    })

preprocessed_data_df = pd.DataFrame(preprocessed_data)
preprocessed_data_df.head()
100%|██████████| 400/400 [00:05<00:00, 66.98it/s]
Out[56]:
file label signal mel_db
0 1-100032-A-0.wav 0 [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [[[-53.3022, -53.3022, -53.3022, -53.3022, -53...
1 1-110389-A-0.wav 0 [-0.0025634766, -0.0011291504, 0.00012207031, ... [[[-10.220087, 0.27027115, 2.8102493, 0.202547...
2 1-116765-A-41.wav 9 [-0.01260376, -0.015045166, -0.0154418945, -0.... [[[-22.876083, -27.223824, -31.730383, -29.084...
3 1-17150-A-12.wav 4 [0.0014343262, 0.0017700195, 0.0015563965, 0.0... [[[-19.44487, -17.593687, -19.15633, -17.98147...
4 1-172649-A-40.wav 8 [0.11935425, 0.1296997, 0.14428711, 0.20455933... [[[9.375204, 14.343754, 10.94474, 10.635153, 1...
In [57]:
# Save data
preprocessed_data_df.to_pickle('preprocessed_data_esc10.pkl')
In [179]:
preprocessed_data_df = pd.read_pickle('preprocessed_data_esc10.pkl')
In [180]:
# Distinct labels
np.sort(preprocessed_data_df['label'].unique())
Out[180]:
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int64)
In [181]:
# Spectrogram feature
X = preprocessed_data_df['mel_db'].values
X = np.concatenate(X, axis=0)
print(X.shape)

n_recs = X.shape[0]
n_rows = X.shape[1]
n_cols = X.shape[2]

X = X.reshape(n_recs, n_rows*n_cols) # Flatten for StandardScaling
print(X.shape)

# Target variable
y = preprocessed_data_df['label'].values

# To categorical - for softmax multiclass in keras
y = to_categorical(y, num_classes=num_classes)
print(y.shape)

# Filenames
files = preprocessed_data_df['file'].values
print(files.shape)

# Train/test split with stratify on y (we want all classes evenly represented in train and test)
X_train, X_test, y_train, y_test, files_train, files_test = train_test_split(X, y, files, test_size=0.2, stratify=y, random_state=42)

# Split train set from above in train and valid set, so we have train, valid and test set
X_train, X_valid, y_train, y_valid, files_train, files_valid = train_test_split(X_train, y_train, files_train, test_size=0.2, stratify=y_train, random_state=42)
print('X_train: ', X_train.shape)
print('y_train: ', y_train.shape)
print('files_train: ', files_train.shape)
print('X_valid: ', X_valid.shape)
print('y_valid: ', y_valid.shape)
print('files_valid: ', files_valid.shape)
print('X_test: ', X_test.shape)
print('y_test: ', y_test.shape)
print('files_test: ', files_test.shape)

# Fit StandardScaler on training data
scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)
X_valid_std = scaler.transform(X_valid)
X_test_std = scaler.transform(X_test)

X_train_std = X_train_std.reshape(X_train.shape[0], n_rows, n_cols, 1) # Reshape to 4D - needed for CNN
X_valid_std = X_valid_std.reshape(X_valid.shape[0], n_rows, n_cols, 1)
X_test_std = X_test_std.reshape(X_test.shape[0], n_rows, n_cols, 1)

print(X_train_std.shape, X_valid_std.shape, X_test_std.shape)
(400, 128, 431)
(400, 55168)
(400, 10)
(400,)
X_train:  (256, 55168)
y_train:  (256, 10)
files_train:  (256,)
X_valid:  (64, 55168)
y_valid:  (64, 10)
files_valid:  (64,)
X_test:  (80, 55168)
y_test:  (80, 10)
files_test:  (80,)
(256, 128, 431, 1) (64, 128, 431, 1) (80, 128, 431, 1)
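The flatten → standardize → reshape pattern above can be sketched on toy arrays; a numpy-only illustration with made-up shapes (per-feature standardization done by hand, which is what StandardScaler computes):

```python
import numpy as np

n_rows, n_cols = 4, 6                              # stand-ins for 128 mel bands x 431 frames
rng = np.random.default_rng(42)
X_train = rng.normal(size=(10, n_rows * n_cols))   # flattened spectrograms
X_valid = rng.normal(size=(3, n_rows * n_cols))

# Fit on the training data only: per-feature mean and std, as StandardScaler does
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
X_train_std = (X_train - mu) / sigma
X_valid_std = (X_valid - mu) / sigma               # transform valid with *train* statistics

# Reshape back to 4D (batch, rows, cols, channels) for the Conv2D input layer
X_train_std = X_train_std.reshape(-1, n_rows, n_cols, 1)
X_valid_std = X_valid_std.reshape(-1, n_rows, n_cols, 1)
print(X_train_std.shape, X_valid_std.shape)  # (10, 4, 6, 1) (3, 4, 6, 1)
```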
In [182]:
# Build and train CNN
model = Sequential([
    Conv2D(filters=64, kernel_size=10, strides=2, padding='same', activation='relu', input_shape=(n_rows, n_cols, 1)),
    MaxPool2D(pool_size=2, strides=2, padding='same'),
    Conv2D(filters=32, kernel_size=10, strides=2, padding='same', activation='relu'),
    MaxPool2D(pool_size=2, strides=2, padding='same'),
    Conv2D(filters=32, kernel_size=5, strides=2, padding='same', activation='relu'),
    MaxPool2D(pool_size=2, strides=2, padding='same'),
    Flatten(),
    Dropout(0.5),
    Dense(units=150, activation='relu'),
    Dense(units=num_classes, activation='softmax')
])

model.summary()

# Model checkpoint to save best model
checkpoint_path = 'models/best_model_esc10_mel'
checkpoint = ModelCheckpoint(checkpoint_path, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')

early_stopping = EarlyStopping(monitor='val_loss', patience=5) # Note: not passed to fit() below, so only the checkpoint callback is active

# Compile the model with adam optimizer and default settings
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Fit
history = model.fit(X_train_std, y_train, epochs=20, validation_data=(X_valid_std, y_valid), batch_size=64, callbacks=[checkpoint])
Model: "sequential_9"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_4 (Conv2D)            (None, 64, 216, 64)       6464      
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 32, 108, 64)       0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 16, 54, 32)        204832    
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 8, 27, 32)         0         
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 4, 14, 32)         25632     
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 2, 7, 32)          0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 448)               0         
_________________________________________________________________
dropout (Dropout)            (None, 448)               0         
_________________________________________________________________
dense_19 (Dense)             (None, 150)               67350     
_________________________________________________________________
dense_20 (Dense)             (None, 10)                1510      
=================================================================
Total params: 305,788
Trainable params: 305,788
Non-trainable params: 0
_________________________________________________________________
Epoch 1/20
4/4 [==============================] - 5s 1s/step - loss: 2.3428 - accuracy: 0.1281 - val_loss: 2.1342 - val_accuracy: 0.2969

Epoch 00001: val_accuracy improved from -inf to 0.29688, saving model to models\best_model_esc10_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_mel\assets
Epoch 2/20
4/4 [==============================] - 5s 1s/step - loss: 2.1050 - accuracy: 0.2703 - val_loss: 1.7907 - val_accuracy: 0.2656

Epoch 00002: val_accuracy did not improve from 0.29688
Epoch 3/20
4/4 [==============================] - 5s 1s/step - loss: 1.8801 - accuracy: 0.3099 - val_loss: 1.5603 - val_accuracy: 0.5625

Epoch 00003: val_accuracy improved from 0.29688 to 0.56250, saving model to models\best_model_esc10_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_mel\assets
Epoch 4/20
4/4 [==============================] - 5s 1s/step - loss: 1.6746 - accuracy: 0.3750 - val_loss: 1.3852 - val_accuracy: 0.5312

Epoch 00004: val_accuracy did not improve from 0.56250
Epoch 5/20
4/4 [==============================] - 5s 1s/step - loss: 1.5879 - accuracy: 0.4052 - val_loss: 1.2118 - val_accuracy: 0.5000

Epoch 00005: val_accuracy did not improve from 0.56250
Epoch 6/20
4/4 [==============================] - 5s 1s/step - loss: 1.3191 - accuracy: 0.5135 - val_loss: 1.0898 - val_accuracy: 0.5469

Epoch 00006: val_accuracy did not improve from 0.56250
Epoch 7/20
4/4 [==============================] - 5s 1s/step - loss: 1.2582 - accuracy: 0.5740 - val_loss: 0.9915 - val_accuracy: 0.6719

Epoch 00007: val_accuracy improved from 0.56250 to 0.67188, saving model to models\best_model_esc10_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_mel\assets
Epoch 8/20
4/4 [==============================] - 5s 1s/step - loss: 1.0834 - accuracy: 0.5667 - val_loss: 1.0165 - val_accuracy: 0.6406

Epoch 00008: val_accuracy did not improve from 0.67188
Epoch 9/20
4/4 [==============================] - 6s 2s/step - loss: 0.9386 - accuracy: 0.6911 - val_loss: 1.0698 - val_accuracy: 0.5938

Epoch 00009: val_accuracy did not improve from 0.67188
Epoch 10/20
4/4 [==============================] - 6s 2s/step - loss: 1.0169 - accuracy: 0.5958 - val_loss: 0.8842 - val_accuracy: 0.6250

Epoch 00010: val_accuracy did not improve from 0.67188
Epoch 11/20
4/4 [==============================] - 6s 2s/step - loss: 0.8699 - accuracy: 0.6786 - val_loss: 0.7926 - val_accuracy: 0.7344

Epoch 00011: val_accuracy improved from 0.67188 to 0.73438, saving model to models\best_model_esc10_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_mel\assets
Epoch 12/20
4/4 [==============================] - 6s 2s/step - loss: 0.7613 - accuracy: 0.7224 - val_loss: 0.8939 - val_accuracy: 0.7656

Epoch 00012: val_accuracy improved from 0.73438 to 0.76562, saving model to models\best_model_esc10_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_mel\assets
Epoch 13/20
4/4 [==============================] - 6s 2s/step - loss: 0.8469 - accuracy: 0.6927 - val_loss: 0.7265 - val_accuracy: 0.7188

Epoch 00013: val_accuracy did not improve from 0.76562
Epoch 14/20
4/4 [==============================] - 7s 2s/step - loss: 0.6509 - accuracy: 0.7521 - val_loss: 0.7636 - val_accuracy: 0.7500

Epoch 00014: val_accuracy did not improve from 0.76562
Epoch 15/20
4/4 [==============================] - 7s 2s/step - loss: 0.5915 - accuracy: 0.7786 - val_loss: 0.7752 - val_accuracy: 0.7188

Epoch 00015: val_accuracy did not improve from 0.76562
Epoch 16/20
4/4 [==============================] - 7s 2s/step - loss: 0.7002 - accuracy: 0.7266 - val_loss: 0.8011 - val_accuracy: 0.7188

Epoch 00016: val_accuracy did not improve from 0.76562
Epoch 17/20
4/4 [==============================] - 7s 2s/step - loss: 0.5931 - accuracy: 0.7896 - val_loss: 0.6936 - val_accuracy: 0.7656

Epoch 00017: val_accuracy did not improve from 0.76562
Epoch 18/20
4/4 [==============================] - 7s 2s/step - loss: 0.4830 - accuracy: 0.8219 - val_loss: 0.7748 - val_accuracy: 0.7500

Epoch 00018: val_accuracy did not improve from 0.76562
Epoch 19/20
4/4 [==============================] - 7s 2s/step - loss: 0.5122 - accuracy: 0.8438 - val_loss: 0.6998 - val_accuracy: 0.8125

Epoch 00019: val_accuracy improved from 0.76562 to 0.81250, saving model to models\best_model_esc10_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_mel\assets
Epoch 20/20
4/4 [==============================] - 7s 2s/step - loss: 0.4693 - accuracy: 0.8193 - val_loss: 0.6584 - val_accuracy: 0.7969

Epoch 00020: val_accuracy did not improve from 0.81250
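The output shapes in the model summary above can be verified by hand: with 'same' padding, each stride-2 conv or pool layer maps a dimension n to ceil(n / 2), and parameter counts follow kernel_h * kernel_w * in_channels * out_channels + out_channels biases. A quick arithmetic check:

```python
from math import ceil

h, w = 128, 431
for _ in range(6):                 # three Conv2D + MaxPool2D pairs, all stride 2
    h, w = ceil(h / 2), ceil(w / 2)
print(h, w)                        # 2 7, the input to Flatten

print(h * w * 32)                  # 448 units after Flatten (last conv has 32 filters)
print(10 * 10 * 1 * 64 + 64)       # 6464   params in the first Conv2D
print(10 * 10 * 64 * 32 + 32)      # 204832 params in the second Conv2D
print(448 * 150 + 150)             # 67350  params in the 150-unit Dense layer
```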
In [184]:
# Load best model
model = tf.keras.models.load_model(checkpoint_path)

# Plot training history
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))

# Plot training accuracy history
axs[0].plot(history.history['accuracy'])
axs[0].plot(history.history['val_accuracy'])
axs[0].set_title('model accuracy')
axs[0].set_ylabel('accuracy')
axs[0].set_xlabel('epoch')
axs[0].set_ylim(0,1)
axs[0].legend(['train', 'val'], loc='lower right')

axs[1].plot(history.history['loss'])
axs[1].plot(history.history['val_loss'])
axs[1].set_title('model loss')
axs[1].set_ylabel('loss')
axs[1].set_xlabel('epoch')
axs[1].legend(['train', 'val'], loc='upper right')

plt.show()

test_loss, test_accuracy = model.evaluate(X_test_std, y_test)

print(f'Test loss: {test_loss}')
print(f'Test accuracy: {test_accuracy}')

y_pred_proba = model.predict(X_test_std)
y_pred_test = np.array([np.argmax(y) for y in y_pred_proba])
y_true_test = np.array([np.argmax(y) for y in y_test])

# Classification Report
print(classification_report(y_true_test, y_pred_test, target_names=label_names))

# Confusion matrix
fig, ax = plt.subplots(figsize=(10,10))
cmp = ConfusionMatrixDisplay.from_predictions(y_true_test, y_pred_test, display_labels=label_names, xticks_rotation='vertical', ax=ax)
plt.show()
3/3 [==============================] - 1s 125ms/step - loss: 0.6438 - accuracy: 0.7625
Test loss: 0.6437910795211792
Test accuracy: 0.762499988079071
                precision    recall  f1-score   support

           dog       1.00      0.50      0.67         8
       rooster       1.00      1.00      1.00         8
          rain       0.40      0.50      0.44         8
     sea_waves       0.62      0.62      0.62         8
crackling_fire       1.00      0.75      0.86         8
   crying_baby       1.00      1.00      1.00         8
      sneezing       0.73      1.00      0.84         8
    clock_tick       0.64      0.88      0.74         8
    helicopter       0.83      0.62      0.71         8
      chainsaw       0.75      0.75      0.75         8

      accuracy                           0.76        80
     macro avg       0.80      0.76      0.76        80
  weighted avg       0.80      0.76      0.76        80

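Since every class has the same test support (8 clips), the macro and weighted averages coincide; the reported macro precision of 0.80 is just the unweighted mean of the per-class precisions:

```python
# Per-class precisions from the classification report above
precisions = [1.00, 1.00, 0.40, 0.62, 1.00, 1.00, 0.73, 0.64, 0.83, 0.75]
macro_precision = sum(precisions) / len(precisions)
print(round(macro_precision, 2))  # 0.8
```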
2.3. Mel spectrogram model using CNN and data augmentation¶

The following data augmentation techniques are used:

  • Noise (white noise, i.e. all frequencies are affected with the same intensity)
  • Time shifting (move left/right and fill with silence)
  • Time stretching (speed up or slow down)
    • audiomentations zero-pads on the right if the result is shorter than the original; if it is longer, it is truncated to the original length
  • Pitch shifting (increase/decrease frequency)
  • Random gain (increase/decrease amplitude/loudness)

Many more augmentation techniques are available (e.g. mixing in background noise), see: https://github.com/iver56/audiomentations

Augmentation is performed offline (i.e. it is not part of the CNN training) and only on the training set, not on the validation and test sets. All augmentation is applied to the raw signal, not to the spectrogram.

Augmentation parameters are sampled randomly, and for each class the amount of training data is doubled.

The audiomentations library is used to perform all augmentations and to build the augmentation pipeline.

In [86]:
files_train[:15]
Out[86]:
array(['4-198965-A-38.wav', '5-221518-A-21.wav', '3-120644-A-12.wav',
       '2-118104-A-21.wav', '2-28314-B-12.wav', '4-164064-A-1.wav',
       '3-150979-A-40.wav', '2-96460-A-1.wav', '4-171519-A-21.wav',
       '3-142005-A-10.wav', '5-203739-A-10.wav', '5-170338-B-41.wav',
       '1-47273-A-21.wav', '1-40730-A-1.wav', '3-164688-A-38.wav'],
      dtype=object)
In [10]:
# Original example
example_path = os.path.join(source_path, '3-164688-A-38.wav') # clock tick
signal, sr = librosa.load(example_path, sr=None)

Audio(signal, rate=sr)
Out[10]:
In [13]:
# Write stream to file
sf.write('aug_example/original.wav', signal, sr)
In [14]:
# Add Noise
augment = Compose([
    # p is the probability that the augmentation is used in the pipeline
    AddGaussianNoise(min_amplitude=0.01, max_amplitude=0.015, p=1.0)
])

# Augment/transform/perturb the audio data
augmented_signal = augment(samples=signal, sample_rate=sr)

# Display Waveform
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))
librosa.display.waveshow(signal, sr=sr, alpha=0.4, ax=axs[0])
librosa.display.waveshow(augmented_signal, sr=sr, alpha=0.4, ax=axs[1])
axs[0].set_title('Original Signal')
axs[1].set_title('Augmented Signal')

Audio(augmented_signal, rate=sr)
Out[14]:
In [15]:
# Write augmented stream to file
sf.write('aug_example/noise.wav', augmented_signal, sr)
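Conceptually, AddGaussianNoise samples a noise amplitude uniformly from [min_amplitude, max_amplitude] and adds white Gaussian noise at that level; a numpy sketch of the idea (not audiomentations' actual implementation):

```python
import numpy as np

def add_gaussian_noise(signal, min_amplitude=0.01, max_amplitude=0.015, seed=None):
    # Sample one amplitude for the whole clip, then add zero-mean white noise
    rng = np.random.default_rng(seed)
    amplitude = rng.uniform(min_amplitude, max_amplitude)
    return signal + amplitude * rng.standard_normal(len(signal))

clean = np.zeros(44100, dtype=np.float32)   # 1 s of silence at 44.1 kHz
noisy = add_gaussian_noise(clean, seed=0)
print(noisy.shape)  # (44100,)
```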
In [18]:
# Time shifting
augment = Compose([
    Shift(min_fraction=-1, max_fraction=1, p=1.0), # With default rollover = true, see: https://github.com/iver56/audiomentations/blob/master/audiomentations/augmentations/shift.py
])

# Augment/transform/perturb the audio data
augmented_signal = augment(samples=signal, sample_rate=sr)

# Display Waveform
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))
librosa.display.waveshow(signal, sr=sr, alpha=0.4, ax=axs[0])
librosa.display.waveshow(augmented_signal, sr=sr, alpha=0.4, ax=axs[1])
axs[0].set_title('Original Signal')
axs[1].set_title('Augmented Signal')

Audio(augmented_signal, rate=sr)
Out[18]:
In [19]:
# Write augmented stream to file
sf.write('aug_example/shift.wav', augmented_signal, sr)
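With the default rollover=True, samples shifted past one end of the clip re-enter at the other, i.e. a circular shift; the idea on a plain list (a conceptual sketch, not the library's code):

```python
def circular_shift(samples, k):
    # Positive k shifts right; samples falling off the end roll over to the front
    k %= len(samples)
    return samples[-k:] + samples[:-k] if k else list(samples)

print(circular_shift([1, 2, 3, 4, 5], 2))   # [4, 5, 1, 2, 3]
print(circular_shift([1, 2, 3, 4, 5], -1))  # [2, 3, 4, 5, 1]
```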
In [21]:
# Time stretching (speeding up or slowing down)
augment = Compose([
    # min_rate/max_rate, rate by how much signal is stretched (e.g. 0.5 is half the original speed)
    TimeStretch(min_rate=0.5, max_rate=1.5, p=1.0) 
])

# Augment/transform/perturb the audio data
augmented_signal = augment(samples=signal, sample_rate=sr)

# Display Waveform
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))
librosa.display.waveshow(signal, sr=sr, alpha=0.4, ax=axs[0])
librosa.display.waveshow(augmented_signal, sr=sr, alpha=0.4, ax=axs[1])
axs[0].set_title('Original Signal')
axs[1].set_title('Augmented Signal')

# The augmented signal still has the same length; it is right-padded with silence or truncated as needed
print('Augmented Signal Length: ', librosa.get_duration(y=augmented_signal, sr=sr))
print('Original Signal Length: ', librosa.get_duration(y=signal, sr=sr))

Audio(augmented_signal, rate=sr)
Augmented Signal Length:  5.0
Original Signal Length:  5.0
Out[21]:
In [22]:
# Write augmented stream to file
sf.write('aug_example/stretch.wav', augmented_signal, sr)
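The raw stretch alone would change the duration by a factor of 1/rate; audiomentations then pads or truncates back to the input length, which is why both clips above report 5.0 s. The underlying arithmetic:

```python
duration = 5.0                        # original clip length in seconds
for rate in (0.5, 1.0, 1.5):
    stretched = duration / rate       # rate < 1 slows down (longer), rate > 1 speeds up
    print(rate, round(stretched, 2))  # 10.0, 5.0 and ~3.33 s before padding/truncation
```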
In [23]:
# Pitch shifting (increase/decrease frequency), no change to tempo
augment = Compose([
    PitchShift(min_semitones=-12, max_semitones=12, p=1.0)
])

augmented_signal = augment(samples=signal, sample_rate=sr)

# Display Waveform
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))
librosa.display.waveshow(signal, sr=sr, alpha=0.4, ax=axs[0])
librosa.display.waveshow(augmented_signal, sr=sr, alpha=0.4, ax=axs[1])
axs[0].set_title('Original Signal')
axs[1].set_title('Augmented Signal')

Audio(augmented_signal, rate=sr)
Out[23]:
In [24]:
# Write augmented stream to file
sf.write('aug_example/pitch.wav', augmented_signal, sr)
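A shift of n semitones multiplies every frequency by 2**(n/12) (equal temperament), so the ±12-semitone range above spans half to double the original pitch:

```python
def pitch_ratio(semitones):
    # 12 semitones = one octave = a factor of 2 in frequency
    return 2 ** (semitones / 12)

print(pitch_ratio(12), pitch_ratio(-12))  # 2.0 0.5
print(round(pitch_ratio(1), 4))           # 1.0595, one semitone up
```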
In [25]:
# Gain (increase/decrease loudness)
augment = Compose([
    Gain(min_gain_in_db=-10, max_gain_in_db=10, p=1.0)
])

augmented_signal = augment(samples=signal, sample_rate=sr)

# Display Waveform
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))
librosa.display.waveshow(signal, sr=sr, alpha=0.4, ax=axs[0])
librosa.display.waveshow(augmented_signal, sr=sr, alpha=0.4, ax=axs[1])
axs[0].set_title('Original Signal')
axs[1].set_title('Augmented Signal')

Audio(augmented_signal, rate=sr, normalize=False)
Out[25]:
In [26]:
# Write augmented stream to file
sf.write('aug_example/gain.wav', augmented_signal, sr)
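Gain in dB maps to an amplitude factor of 10**(dB/20) (20 dB per factor of 10 in amplitude), so the ±10 dB range above scales the waveform by roughly 0.32x to 3.16x:

```python
def gain_factor(db):
    # Amplitude (not power) gain: +20 dB = x10, +6 dB ~ x2
    return 10 ** (db / 20)

print(round(gain_factor(10), 3), round(gain_factor(-10), 3))  # 3.162 0.316
```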
In [30]:
# Pipeline with all augmentations, where each augmentation is applied with p=1.0
augment = Compose([
    AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=1.0),
    Shift(min_fraction=-1, max_fraction=1, p=1.0),
    TimeStretch(min_rate=0.5, max_rate=1.5, p=1.0),
    PitchShift(min_semitones=-12, max_semitones=12, p=1.0),
    Gain(min_gain_in_db=-10, max_gain_in_db=10, p=1.0)
])

augmented_signal = augment(samples=signal, sample_rate=sr)

# Display Waveform
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))
librosa.display.waveshow(signal, sr=sr, alpha=0.4, ax=axs[0])
librosa.display.waveshow(augmented_signal, sr=sr, alpha=0.4, ax=axs[1])
axs[0].set_title('Original Signal')
axs[1].set_title('Augmented Signal')

Audio(augmented_signal, rate=sr, normalize=False)
Out[30]:
In [31]:
# Write augmented stream to file
sf.write('aug_example/combine.wav', augmented_signal, sr)
In [32]:
# Another augmentation example
example_path = os.path.join(source_path, '1-17124-A-43.wav')
signal, sr = librosa.load(example_path, sr=None)

Audio(signal, rate=sr)
Out[32]:
In [34]:
augmented_signal = augment(samples=signal, sample_rate=sr)

# Display Waveform
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))
librosa.display.waveshow(signal, sr=sr, alpha=0.4, ax=axs[0])
librosa.display.waveshow(augmented_signal, sr=sr, alpha=0.4, ax=axs[1])
axs[0].set_title('Original Signal')
axs[1].set_title('Augmented Signal')

Audio(augmented_signal, rate=sr, normalize=False)
Out[34]:
In [37]:
# Create Mel spectrogram original and augmented
mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
mel_db = librosa.power_to_db(mel)

mel_aug = librosa.feature.melspectrogram(y=augmented_signal, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
mel_db_aug = librosa.power_to_db(mel_aug)

fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))
librosa.display.specshow(mel_db, x_axis='time', y_axis='mel', ax=axs[0], sr=sr, hop_length=hop_length, cmap='magma')
librosa.display.specshow(mel_db_aug, x_axis='time', y_axis='mel', ax=axs[1], sr=sr, hop_length=hop_length, cmap='magma')
axs[0].set_title('Original Signal')
axs[1].set_title('Augmented Signal')
plt.show()
In [97]:
preprocessed_augmented_data = []

# Iterate through training files and create an augmented version for each, i.e. double the amount of training data
for file_train in tqdm(files_train):
    
    # Load file
    file_path = os.path.join(source_path, file_train)
    signal, sr = librosa.load(file_path, sr=sr_all)

    # Augment data
    augmented_signal = augment(samples=signal, sample_rate=sr)

    # Create Mel Spectrogram and convert to db scale
    mel = librosa.feature.melspectrogram(y=augmented_signal, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel)
    mel_db = mel_db.reshape(1, mel_db.shape[0], mel_db.shape[1])
    
    # Extract class label from file name
    label_org = int(file_train.split('-')[-1].split('.')[0])
    
    # Get new label from label map
    label = label_map[label_org]

    preprocessed_augmented_data.append({
        'original_file': file_train,
        'label': label,
        'signal': augmented_signal,
        'mel_db': mel_db
    })

preprocessed_augmented_data_df = pd.DataFrame(preprocessed_augmented_data)
preprocessed_augmented_data_df.head()
100%|██████████| 256/256 [02:15<00:00,  1.89it/s]
Out[97]:
original_file label signal mel_db
0 4-198965-A-38.wav 7 [-0.0054802597, -0.0038028287, 0.0004848113, -... [[[-21.934002, -19.716337, -21.716616, -27.575...
1 5-221518-A-21.wav 6 [-0.0008060812, -0.0067617483, -0.0058536627, ... [[[-36.12893, -36.12439, -35.698, -33.352333, ...
2 3-120644-A-12.wav 4 [-0.0012799372, 0.00018340613, 0.00279705, 0.0... [[[0.7032878, 3.9970121, 4.408907, 2.013706, -...
3 2-118104-A-21.wav 6 [0.00020468842, -0.0016054737, -0.0004374784, ... [[[-38.96955, -38.305336, -42.408672, -46.3244...
4 2-28314-B-12.wav 4 [-0.0023565497, 0.013027733, 0.032121494, 0.02... [[[-13.477949, -6.6325307, -6.4565616, -6.2998...
In [98]:
# Save data
preprocessed_augmented_data_df.to_pickle('preprocessed_augmented_data_esc10.pkl')
In [99]:
preprocessed_augmented_data_df.count()
Out[99]:
original_file    256
label            256
signal           256
mel_db           256
dtype: int64
In [185]:
# Load data
preprocessed_augmented_data_df = pd.read_pickle('preprocessed_augmented_data_esc10.pkl')
In [186]:
X_aug = preprocessed_augmented_data_df['mel_db'].values
X_aug = np.concatenate(X_aug, axis=0)

n_aug_recs = X_aug.shape[0]
n_aug_rows = X_aug.shape[1]
n_aug_cols = X_aug.shape[2]

X_aug = X_aug.reshape(n_aug_recs, n_aug_rows*n_aug_cols) # Flatten for StandardScaling
X_aug.shape
Out[186]:
(256, 55168)
In [187]:
# Concat original train and augmented train data
X_train_all = np.concatenate([X_aug, X_train], axis=0)
X_train_all.shape
Out[187]:
(512, 55168)
In [188]:
# To categorical of augmented labels and validation labels and concat with original train labels
y_aug = preprocessed_augmented_data_df['label'].values
y_aug = to_categorical(y_aug, num_classes=num_classes)
y_aug.shape
Out[188]:
(256, 10)
In [189]:
y_train_all = np.concatenate([y_aug, y_train], axis=0)
y_train_all.shape
Out[189]:
(512, 10)
In [190]:
# Fit StandardScaler on training data incl. augmented data and transform valid and test set
scaler = StandardScaler()
X_train_all_std = scaler.fit_transform(X_train_all)
X_valid_std = scaler.transform(X_valid)
X_test_std = scaler.transform(X_test)

X_train_all_std = X_train_all_std.reshape(X_train_all.shape[0], n_rows, n_cols, 1) # Reshape to 4D - needed for CNN
X_valid_std = X_valid_std.reshape(X_valid.shape[0], n_rows, n_cols, 1)
X_test_std = X_test_std.reshape(X_test.shape[0], n_rows, n_cols, 1)

print(X_train_all_std.shape, X_valid_std.shape, X_test_std.shape)
(512, 128, 431, 1) (64, 128, 431, 1) (80, 128, 431, 1)
In [195]:
# Build and train CNN
model = Sequential([
    Conv2D(filters=64, kernel_size=10, strides=2, padding='same', activation='relu', input_shape=(n_rows, n_cols, 1)),
    MaxPool2D(pool_size=2, strides=2, padding='same'),
    Conv2D(filters=32, kernel_size=10, strides=2, padding='same', activation='relu'),
    MaxPool2D(pool_size=2, strides=2, padding='same'),
    Conv2D(filters=32, kernel_size=5, strides=2, padding='same', activation='relu'),
    MaxPool2D(pool_size=2, strides=2, padding='same'),
    Flatten(),
    Dropout(0.5),
    Dense(units=150, activation='relu'),
    Dense(units=num_classes, activation='softmax')
])

model.summary()

# Model checkpoint to save best model
checkpoint_path = 'models/best_model_esc10_aug_mel'
checkpoint = ModelCheckpoint(checkpoint_path, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')

early_stopping = EarlyStopping(monitor='val_loss', patience=5) # Note: not passed to fit() below, so only the checkpoint callback is active

# Compile the model with adam optimizer and default settings
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Fit
history = model.fit(X_train_all_std, y_train_all, epochs=20, validation_data=(X_valid_std, y_valid), batch_size=64, callbacks=[checkpoint])
Model: "sequential_12"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_13 (Conv2D)           (None, 64, 216, 64)       6464      
_________________________________________________________________
max_pooling2d_13 (MaxPooling (None, 32, 108, 64)       0         
_________________________________________________________________
conv2d_14 (Conv2D)           (None, 16, 54, 32)        204832    
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 8, 27, 32)         0         
_________________________________________________________________
conv2d_15 (Conv2D)           (None, 4, 14, 32)         25632     
_________________________________________________________________
max_pooling2d_15 (MaxPooling (None, 2, 7, 32)          0         
_________________________________________________________________
flatten_5 (Flatten)          (None, 448)               0         
_________________________________________________________________
dropout_3 (Dropout)          (None, 448)               0         
_________________________________________________________________
dense_25 (Dense)             (None, 150)               67350     
_________________________________________________________________
dense_26 (Dense)             (None, 10)                1510      
=================================================================
Total params: 305,788
Trainable params: 305,788
Non-trainable params: 0
_________________________________________________________________
Epoch 1/20
8/8 [==============================] - 12s 1s/step - loss: 2.2706 - accuracy: 0.1487 - val_loss: 1.8437 - val_accuracy: 0.2656

Epoch 00001: val_accuracy improved from -inf to 0.26562, saving model to models\best_model_esc10_aug_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_aug_mel\assets
Epoch 2/20
8/8 [==============================] - 13s 2s/step - loss: 2.0110 - accuracy: 0.2199 - val_loss: 1.6305 - val_accuracy: 0.2812

Epoch 00002: val_accuracy improved from 0.26562 to 0.28125, saving model to models\best_model_esc10_aug_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_aug_mel\assets
Epoch 3/20
8/8 [==============================] - 14s 2s/step - loss: 1.8502 - accuracy: 0.3206 - val_loss: 1.3964 - val_accuracy: 0.4062

Epoch 00003: val_accuracy improved from 0.28125 to 0.40625, saving model to models\best_model_esc10_aug_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_aug_mel\assets
Epoch 4/20
8/8 [==============================] - 14s 2s/step - loss: 1.6290 - accuracy: 0.4078 - val_loss: 1.2623 - val_accuracy: 0.5781

Epoch 00004: val_accuracy improved from 0.40625 to 0.57812, saving model to models\best_model_esc10_aug_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_aug_mel\assets
Epoch 5/20
8/8 [==============================] - 14s 2s/step - loss: 1.4048 - accuracy: 0.4573 - val_loss: 1.0401 - val_accuracy: 0.6250

Epoch 00005: val_accuracy improved from 0.57812 to 0.62500, saving model to models\best_model_esc10_aug_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_aug_mel\assets
Epoch 6/20
8/8 [==============================] - 15s 2s/step - loss: 1.2794 - accuracy: 0.5080 - val_loss: 0.9510 - val_accuracy: 0.6250

Epoch 00006: val_accuracy did not improve from 0.62500
Epoch 7/20
8/8 [==============================] - 18s 2s/step - loss: 1.1396 - accuracy: 0.6129 - val_loss: 0.8842 - val_accuracy: 0.6406

Epoch 00007: val_accuracy improved from 0.62500 to 0.64062, saving model to models\best_model_esc10_aug_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_aug_mel\assets
Epoch 8/20
8/8 [==============================] - 18s 2s/step - loss: 1.0376 - accuracy: 0.6439 - val_loss: 0.8334 - val_accuracy: 0.6562

Epoch 00008: val_accuracy improved from 0.64062 to 0.65625, saving model to models\best_model_esc10_aug_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_aug_mel\assets
Epoch 9/20
8/8 [==============================] - 19s 2s/step - loss: 1.0272 - accuracy: 0.6393 - val_loss: 0.7389 - val_accuracy: 0.6719

Epoch 00009: val_accuracy improved from 0.65625 to 0.67188, saving model to models\best_model_esc10_aug_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_aug_mel\assets
Epoch 10/20
8/8 [==============================] - 19s 2s/step - loss: 0.8683 - accuracy: 0.6837 - val_loss: 0.7114 - val_accuracy: 0.7188

Epoch 00010: val_accuracy improved from 0.67188 to 0.71875, saving model to models\best_model_esc10_aug_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_aug_mel\assets
Epoch 11/20
8/8 [==============================] - 21s 3s/step - loss: 0.7881 - accuracy: 0.7112 - val_loss: 0.7470 - val_accuracy: 0.7188

Epoch 00011: val_accuracy did not improve from 0.71875
Epoch 12/20
8/8 [==============================] - 20s 2s/step - loss: 0.7363 - accuracy: 0.7387 - val_loss: 0.6132 - val_accuracy: 0.7969

Epoch 00012: val_accuracy improved from 0.71875 to 0.79688, saving model to models\best_model_esc10_aug_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_aug_mel\assets
Epoch 13/20
8/8 [==============================] - 18s 2s/step - loss: 0.6832 - accuracy: 0.7514 - val_loss: 0.6247 - val_accuracy: 0.7656

Epoch 00013: val_accuracy did not improve from 0.79688
Epoch 14/20
8/8 [==============================] - 18s 2s/step - loss: 0.6359 - accuracy: 0.7694 - val_loss: 0.5656 - val_accuracy: 0.7969

Epoch 00014: val_accuracy did not improve from 0.79688
Epoch 15/20
8/8 [==============================] - 17s 2s/step - loss: 0.5595 - accuracy: 0.7995 - val_loss: 0.7144 - val_accuracy: 0.7812

Epoch 00015: val_accuracy did not improve from 0.79688
Epoch 16/20
8/8 [==============================] - 19s 2s/step - loss: 0.4937 - accuracy: 0.8137 - val_loss: 0.6433 - val_accuracy: 0.7812

Epoch 00016: val_accuracy did not improve from 0.79688
Epoch 17/20
8/8 [==============================] - 22s 3s/step - loss: 0.4023 - accuracy: 0.8463 - val_loss: 0.7416 - val_accuracy: 0.7812

Epoch 00017: val_accuracy did not improve from 0.79688
Epoch 18/20
8/8 [==============================] - 20s 3s/step - loss: 0.3824 - accuracy: 0.8685 - val_loss: 0.7539 - val_accuracy: 0.7812

Epoch 00018: val_accuracy did not improve from 0.79688
Epoch 19/20
8/8 [==============================] - 17s 2s/step - loss: 0.3731 - accuracy: 0.8633 - val_loss: 0.6831 - val_accuracy: 0.8125

Epoch 00019: val_accuracy improved from 0.79688 to 0.81250, saving model to models\best_model_esc10_aug_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_aug_mel\assets
Epoch 20/20
8/8 [==============================] - 17s 2s/step - loss: 0.3562 - accuracy: 0.8674 - val_loss: 0.5374 - val_accuracy: 0.8281

Epoch 00020: val_accuracy improved from 0.81250 to 0.82812, saving model to models\best_model_esc10_aug_mel
INFO:tensorflow:Assets written to: models\best_model_esc10_aug_mel\assets
In [197]:
# Load best model
model = tf.keras.models.load_model(checkpoint_path)

# Plot training history
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))

# Plot training accuracy history
axs[0].plot(history.history['accuracy'])
axs[0].plot(history.history['val_accuracy'])
axs[0].set_title('model accuracy')
axs[0].set_ylabel('accuracy')
axs[0].set_xlabel('epoch')
axs[0].set_ylim(0,1)
axs[0].legend(['train', 'val'], loc='lower right')

axs[1].plot(history.history['loss'])
axs[1].plot(history.history['val_loss'])
axs[1].set_title('model loss')
axs[1].set_ylabel('loss')
axs[1].set_xlabel('epoch')
axs[1].legend(['train', 'val'], loc='upper right')

plt.show()

test_loss, test_accuracy = model.evaluate(X_test_std, y_test)

print(f'Test loss: {test_loss}')
print(f'Test accuracy: {test_accuracy}')

y_pred_proba = model.predict(X_test_std)
y_pred_test = np.argmax(y_pred_proba, axis=1)
y_true_test = np.argmax(y_test, axis=1)

# Classification Report
print(classification_report(y_true_test, y_pred_test, target_names=label_names))

# Confusion matrix
fig, ax = plt.subplots(figsize=(10,10))
cmp = ConfusionMatrixDisplay.from_predictions(y_true_test, y_pred_test, display_labels=label_names, xticks_rotation='vertical', ax=ax)
plt.show()
3/3 [==============================] - 1s 137ms/step - loss: 0.7091 - accuracy: 0.8500
Test loss: 0.7091293334960938
Test accuracy: 0.8500000238418579
WARNING:tensorflow:5 out of the last 13 calls to <function Model.make_predict_function.<locals>.predict_function at 0x00000223B9E86040> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
                precision    recall  f1-score   support

           dog       1.00      0.75      0.86         8
       rooster       1.00      1.00      1.00         8
          rain       1.00      0.50      0.67         8
     sea_waves       0.60      0.75      0.67         8
crackling_fire       1.00      0.62      0.77         8
   crying_baby       1.00      1.00      1.00         8
      sneezing       0.80      1.00      0.89         8
    clock_tick       0.67      1.00      0.80         8
    helicopter       0.88      0.88      0.88         8
      chainsaw       0.89      1.00      0.94         8

      accuracy                           0.85        80
     macro avg       0.88      0.85      0.85        80
  weighted avg       0.88      0.85      0.85        80

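Since every class in the report above has the same support (8 samples), the macro and weighted averages coincide. A minimal sketch of how the macro-averaged F1 is obtained, with the per-class F1 values copied from the table:

```python
# Per-class F1 scores from the classification report above
f1_per_class = [0.86, 1.00, 0.67, 0.67, 0.77, 1.00, 0.89, 0.80, 0.88, 0.94]

# Macro average: unweighted mean over classes
macro_f1 = sum(f1_per_class) / len(f1_per_class)
print(round(macro_f1, 2))  # → 0.85

# With equal support per class, the weighted average equals the macro average
support = [8] * 10
weighted_f1 = sum(f * s for f, s in zip(f1_per_class, support)) / sum(support)
```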
2.4. Multiple experiments with CNN and augmentation¶

Run a set of experiments with different augmentation setups while keeping the feature extraction and the CNN architecture fixed:

  • Without augmentation (= 256 training records)
  • Create 1 augmented record per original training record (= 512 training records in total)
  • Create 5 augmented records per original training record (= 1536 training records in total)
In [2]:
source_path = Path('../ESC-50-master/audio')
metadata_path = os.path.join('../ESC-50-master/meta/esc50.csv')
metadata_df = pd.read_csv(metadata_path)
metadata_esc10_df = metadata_df[metadata_df['esc10']]
esc10_files = metadata_esc10_df['filename'].values
In [3]:
def extract_mel_spectrogram(sr, n_fft, hop_length, n_mels):
    
    # Create spectrograms for all ESC-10 files
    preprocessed_data = []

    for file in tqdm(esc10_files):

        # Load file
        file_path = os.path.join(source_path, file)
        signal, sr = librosa.load(file_path, sr=sr)

        # Create Mel Spectrogram and convert to db scale
        mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
        mel_db = librosa.power_to_db(mel)
        mel_db = mel_db.reshape(1, mel_db.shape[0], mel_db.shape[1])

        # Extract class label from filename
        label_org = int(file.split('-')[-1].split('.')[0])

        # Get new label from label map
        label = label_map[label_org]

        preprocessed_data.append({
            'file': file,
            'label': label,
            'signal': signal,
            'mel_db': mel_db
        })

    preprocessed_data_df = pd.DataFrame(preprocessed_data)
    
    return preprocessed_data_df
In [4]:
def split_data(preprocessed_data_df):
    
    # Spectrogram feature
    X = preprocessed_data_df['mel_db'].values
    X = np.concatenate(X, axis=0)

    n_recs = X.shape[0]
    n_rows = X.shape[1]
    n_cols = X.shape[2]

    X = X.reshape(n_recs, n_rows*n_cols) # Flatten for StandardScaling

    # Target variable
    y = preprocessed_data_df['label'].values
    y = to_categorical(y, num_classes=num_classes)

    # Filenames
    files = preprocessed_data_df['file'].values

    # Train/test split with stratify on y (we want all classes evenly represented in train and test)
    X_train, X_test, y_train, y_test, files_train, files_test = train_test_split(X, y, files, test_size=0.2, stratify=y, random_state=42)
    
    # Split train set from above in train and valid set, so we have train, valid and test set
    X_train, X_valid, y_train, y_valid, files_train, files_valid = train_test_split(X_train, y_train, files_train, test_size=0.2, stratify=y_train, random_state=42)

    # Fit StandardScaler on training data
    scaler = StandardScaler()
    X_train_std = scaler.fit_transform(X_train)
    X_valid_std = scaler.transform(X_valid)
    X_test_std = scaler.transform(X_test)

    X_train_std = X_train_std.reshape(X_train.shape[0], n_rows, n_cols, 1) # Reshape to 4D - needed for CNN
    X_valid_std = X_valid_std.reshape(X_valid.shape[0], n_rows, n_cols, 1)
    X_test_std = X_test_std.reshape(X_test.shape[0], n_rows, n_cols, 1)

    return X_train_std, X_valid_std, X_test_std, X_train, X_valid, X_test, y_train, y_valid, y_test, files_train
In [5]:
def create_augmented_data(n_augmentation_per_train,
                          p_per_augmentation,
                          X_train,
                          X_valid,
                          X_test,
                          y_train,
                          files_train, 
                          sr, 
                          n_fft, 
                          hop_length, 
                          n_mels):
    
    # Pipeline with all augmentations; each individual augmentation is applied with probability p_per_augmentation
    augment = Compose([
        AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=p_per_augmentation),
        Shift(min_fraction=-1, max_fraction=1, p=p_per_augmentation),
        TimeStretch(min_rate=0.5, max_rate=1.5, p=p_per_augmentation),
        PitchShift(min_semitones=-12, max_semitones=12, p=p_per_augmentation),
        Gain(min_gain_in_db=-10, max_gain_in_db=10, p=p_per_augmentation)
    ])
    
    preprocessed_augmented_data = []

    # Iterate through training files and create n_augmentation_per_train augmented versions of each
    for file_train in tqdm(files_train):
        
        # Load file
        file_path = os.path.join(source_path, file_train)
        signal, sr = librosa.load(file_path, sr=sr)
        
        # For each training file, create n_augmentation_per_train augmented files
        for i in range(n_augmentation_per_train):      

            # Augment data
            augmented_signal = augment(samples=signal, sample_rate=sr)

            # Create Mel Spectrogram and convert to db scale
            mel = librosa.feature.melspectrogram(y=augmented_signal, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
            mel_db = librosa.power_to_db(mel)
            mel_db = mel_db.reshape(1, mel_db.shape[0], mel_db.shape[1])

            # Extract class label from file name
            label_org = int(file_train.split('-')[-1].split('.')[0])

            # Get new label from label map
            label = label_map[label_org]

            preprocessed_augmented_data.append({
                'original_file': file_train,
                'label': label,
                'signal': augmented_signal,
                'mel_db': mel_db
            })

    preprocessed_augmented_data_df = pd.DataFrame(preprocessed_augmented_data)
    
    X_aug = preprocessed_augmented_data_df['mel_db'].values
    X_aug = np.concatenate(X_aug, axis=0)

    n_aug_recs = X_aug.shape[0]
    n_aug_rows = X_aug.shape[1]
    n_aug_cols = X_aug.shape[2]

    X_aug = X_aug.reshape(n_aug_recs, n_aug_rows*n_aug_cols) # Flatten for StandardScaling
    X_train_all = np.concatenate([X_aug, X_train], axis=0)
    
    # One-hot encode the augmented labels and concatenate them with the original training labels
    y_aug = preprocessed_augmented_data_df['label'].values
    y_aug = to_categorical(y_aug, num_classes=num_classes)
    y_train_all = np.concatenate([y_aug, y_train], axis=0)
       
    # Fit StandardScaler on training data incl. augmented data and transform valid and test set
    scaler = StandardScaler()
    X_train_all_std = scaler.fit_transform(X_train_all)
    X_valid_std = scaler.transform(X_valid)
    X_test_std = scaler.transform(X_test)

    X_train_all_std = X_train_all_std.reshape(X_train_all.shape[0], n_aug_rows, n_aug_cols, 1) # Reshape to 4D - needed for CNN
    X_valid_std = X_valid_std.reshape(X_valid.shape[0], n_aug_rows, n_aug_cols, 1)
    X_test_std = X_test_std.reshape(X_test.shape[0], n_aug_rows, n_aug_cols, 1)
    
    return X_train_all_std, X_valid_std, X_test_std, y_train_all
In [6]:
def build_run_training(experiment_id,
                       repetition_id,
                       X_train_std,
                       X_valid_std,
                       X_test_std,
                       y_train,
                       y_valid,
                       y_test,
                       n_filters_l1, 
                       n_filters_l2, 
                       n_filters_l3, 
                       n_dense_layer,
                       batch_size,
                       epochs):
    
    n_rows = X_train_std.shape[1]
    n_cols = X_train_std.shape[2]
    
    # Build and train CNN
    model = Sequential([
        Conv2D(filters=n_filters_l1, kernel_size=10, strides=2, padding='same', activation='relu', input_shape=(n_rows, n_cols, 1)),
        MaxPool2D(pool_size=2, strides=2, padding='same'),
        Conv2D(filters=n_filters_l2, kernel_size=10, strides=2, padding='same', activation='relu'),
        MaxPool2D(pool_size=2, strides=2, padding='same'),
        Conv2D(filters=n_filters_l3, kernel_size=5, strides=2, padding='same', activation='relu'),
        MaxPool2D(pool_size=2, strides=2, padding='same'),
        Flatten(),
        Dropout(0.5),
        Dense(units=n_dense_layer, activation='relu'),
        Dense(units=num_classes, activation='softmax')
    ])

    # Model checkpoint to save best model
    checkpoint_path = f'models/best_model_esc10_exp_{experiment_id}_{repetition_id}'
    checkpoint = ModelCheckpoint(checkpoint_path, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max', save_weights_only=True)

    # Compile the model with adam optimizer and default settings
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

    # Fit
    history = model.fit(X_train_std, y_train, epochs=epochs, validation_data=(X_valid_std, y_valid), batch_size=batch_size, callbacks=[checkpoint], verbose=1)
    
    # Load best weights
    model.load_weights(checkpoint_path)
    
    # Test
    test_loss, test_accuracy = model.evaluate(X_test_std, y_test, verbose=0)
    
    return history, test_accuracy
In [7]:
# Define some constants
num_classes = 10 # For ESC10
n_repetitions = 20 # Number of repetitions of each experiment

# Remap original labels for ESC10 data
label_map = {
    0: 0,  # dog
    1: 1,  # rooster
    10: 2, # rain
    11: 3, # sea_waves
    12: 4, # crackling_fire
    20: 5, # crying_baby
    21: 6, # sneezing
    38: 7, # clock_tick
    40: 8, # helicopter
    41: 9  # chainsaw
}

label_names = [
    'dog',
    'rooster',
    'rain',
    'sea_waves',
    'crackling_fire',
    'crying_baby',
    'sneezing',
    'clock_tick',
    'helicopter',
    'chainsaw'
]

# Define the feature extraction, augmentation and CNN setups for the experiment grid
feature_extraction_setups = [
    {
        'sr': 44100,
        'n_fft': 2048,
        'hop_length': 512,
        'n_mels': 128
    }
]

augmentation_setups = [
    {
        'n_augmentation_per_train': 0, # No augmentation
        'p_per_augmentation': 0.0
    },
    {
        'n_augmentation_per_train': 1,
        'p_per_augmentation': 0.5
    },
    {
        'n_augmentation_per_train': 5,
        'p_per_augmentation': 0.5
    }
]

cnn_setups = [
    {
        'n_filters_l1': 64,
        'n_filters_l2': 32,
        'n_filters_l3': 32,
        'n_dense_layer': 150,
        'batch_size': 64,
        'epochs': 20
    }
]
In [9]:
%%time

# Loop through feature extraction setups
experiment_id = 0

experiment_results = []

for feature_extraction in feature_extraction_setups:
    
    print(feature_extraction)
    sr = feature_extraction['sr']
    n_fft = feature_extraction['n_fft']
    hop_length = feature_extraction['hop_length']
    n_mels = feature_extraction['n_mels']
    
    # Extract features
    preprocessed_data_df = extract_mel_spectrogram(sr, n_fft, hop_length, n_mels)
    
    # Create train/test split
    X_train_std_org, X_valid_std_org, X_test_std_org, X_train_org, X_valid_org, X_test_org, y_train_org, y_valid_org, y_test_org, files_train = split_data(preprocessed_data_df)
    print('Shape before augmentation: ', X_train_std_org.shape, X_valid_std_org.shape, X_test_std_org.shape)

    # Loop through augmentation setups
    for augmentation in augmentation_setups:
        
        print(augmentation)
        n_augmentation_per_train = augmentation['n_augmentation_per_train']
        p_per_augmentation = augmentation['p_per_augmentation']
        
        if n_augmentation_per_train != 0:
        
            X_train_std, X_valid_std, X_test_std, y_train = create_augmented_data(n_augmentation_per_train,
                                                                                  p_per_augmentation,
                                                                                  X_train_org,
                                                                                  X_valid_org,
                                                                                  X_test_org,
                                                                                  y_train_org,
                                                                                  files_train, 
                                                                                  sr, 
                                                                                  n_fft, 
                                                                                  hop_length, 
                                                                                  n_mels)

        else:
            
            X_train_std = X_train_std_org
            X_valid_std = X_valid_std_org
            X_test_std = X_test_std_org
            y_train = y_train_org
            
        print('Shape after augmentation: ', X_train_std.shape, y_train.shape, X_valid_std.shape, X_test_std.shape)
        
        # Save X_test_std and y_test_org with experiment_id for later test analysis with best model
        # Saving is needed because the test data is scaled with statistics from the training data, which in turn depends on the augmentation setup
        np.save(f'X_test_std_{experiment_id}', X_test_std)
        np.save(f'y_test_org_{experiment_id}', y_test_org)
        
        # Loop through CNN setups
        for cnn in cnn_setups:
            
            print(cnn)
            n_filters_l1 = cnn['n_filters_l1']
            n_filters_l2 = cnn['n_filters_l2']
            n_filters_l3 = cnn['n_filters_l3']
            n_dense_layer = cnn['n_dense_layer']
            batch_size = cnn['batch_size']
            epochs = cnn['epochs']  
            
            for repetition_id in range(n_repetitions):
            
                history, test_accuracy = build_run_training(experiment_id,
                                                            repetition_id,
                                                            X_train_std,
                                                            X_valid_std,
                                                            X_test_std,
                                                            y_train,
                                                            y_valid_org,
                                                            y_test_org,
                                                            n_filters_l1, 
                                                            n_filters_l2, 
                                                            n_filters_l3, 
                                                            n_dense_layer,
                                                            batch_size,
                                                            epochs)

                print('Test accuracy: ', test_accuracy)
            
                # Add everything to experiment results
                experiment_results.append({
                    'experiment_id': experiment_id,
                    'repetition_id': repetition_id,
                    'sr': sr,
                    'n_fft': n_fft,
                    'hop_length': hop_length,
                    'n_mels': n_mels,
                    'n_augmentation_per_train': n_augmentation_per_train,
                    'p_per_augmentation': p_per_augmentation,
                    'n_filters_l1': n_filters_l1,
                    'n_filters_l2': n_filters_l2,
                    'n_filters_l3': n_filters_l3,
                    'n_dense_layer': n_dense_layer,
                    'batch_size': batch_size,
                    'epochs': epochs,
                    'history_accuracy': history.history['accuracy'],
                    'history_val_accuracy': history.history['val_accuracy'],
                    'history_loss': history.history['loss'],
                    'history_val_loss': history.history['val_loss'],
                    'test_accuracy': test_accuracy
                })
            
            experiment_id += 1
            
experiment_results_df = pd.DataFrame(experiment_results)

# Save experiment results and Testdata
experiment_results_df.to_pickle('experiment_results_df.pkl')
experiment_results_df.head()
{'sr': 44100, 'n_fft': 2048, 'hop_length': 512, 'n_mels': 128}
100%|██████████| 400/400 [00:05<00:00, 66.94it/s]
Shape before augmentation:  (256, 128, 431, 1) (64, 128, 431, 1) (80, 128, 431, 1)
{'n_augmentation_per_train': 0, 'p_per_augmentation': 0.0}
Shape after augmentation:  (256, 128, 431, 1) (256, 10) (64, 128, 431, 1) (80, 128, 431, 1)
{'n_filters_l1': 64, 'n_filters_l2': 32, 'n_filters_l3': 32, 'n_dense_layer': 150, 'batch_size': 64, 'epochs': 20}
Epoch 1/20
4/4 [==============================] - 6s 2s/step - loss: 2.3424 - accuracy: 0.0969 - val_loss: 2.1278 - val_accuracy: 0.1875

Epoch 00001: val_accuracy improved from -inf to 0.18750, saving model to models\best_model_esc10_exp_0_0
Epoch 2/20
4/4 [==============================] - 6s 1s/step - loss: 2.1333 - accuracy: 0.2021 - val_loss: 1.9561 - val_accuracy: 0.2031

Epoch 00002: val_accuracy improved from 0.18750 to 0.20312, saving model to models\best_model_esc10_exp_0_0
Epoch 3/20
4/4 [==============================] - 6s 1s/step - loss: 1.9337 - accuracy: 0.2271 - val_loss: 1.7738 - val_accuracy: 0.3438

Epoch 00003: val_accuracy improved from 0.20312 to 0.34375, saving model to models\best_model_esc10_exp_0_0
Epoch 4/20
4/4 [==============================] - 5s 1s/step - loss: 1.7842 - accuracy: 0.3458 - val_loss: 1.6229 - val_accuracy: 0.3906

Epoch 00004: val_accuracy improved from 0.34375 to 0.39062, saving model to models\best_model_esc10_exp_0_0
Epoch 5/20
4/4 [==============================] - 6s 1s/step - loss: 1.7001 - accuracy: 0.3688 - val_loss: 1.3930 - val_accuracy: 0.5312

Epoch 00005: val_accuracy improved from 0.39062 to 0.53125, saving model to models\best_model_esc10_exp_0_0
Epoch 6/20
4/4 [==============================] - 5s 1s/step - loss: 1.5654 - accuracy: 0.3891 - val_loss: 1.2672 - val_accuracy: 0.5781

Epoch 00006: val_accuracy improved from 0.53125 to 0.57812, saving model to models\best_model_esc10_exp_0_0
Epoch 7/20
4/4 [==============================] - 5s 1s/step - loss: 1.2690 - accuracy: 0.5245 - val_loss: 1.1660 - val_accuracy: 0.5625

Epoch 00007: val_accuracy did not improve from 0.57812
Epoch 8/20
4/4 [==============================] - 5s 1s/step - loss: 1.2750 - accuracy: 0.5693 - val_loss: 1.0166 - val_accuracy: 0.6406

Epoch 00008: val_accuracy improved from 0.57812 to 0.64062, saving model to models\best_model_esc10_exp_0_0
Epoch 9/20
4/4 [==============================] - 5s 1s/step - loss: 1.0537 - accuracy: 0.6250 - val_loss: 0.9858 - val_accuracy: 0.6875

Epoch 00009: val_accuracy improved from 0.64062 to 0.68750, saving model to models\best_model_esc10_exp_0_0
Epoch 10/20
4/4 [==============================] - 5s 1s/step - loss: 1.0464 - accuracy: 0.6365 - val_loss: 0.9142 - val_accuracy: 0.6094

Epoch 00010: val_accuracy did not improve from 0.68750
Epoch 11/20
4/4 [==============================] - 6s 2s/step - loss: 0.9191 - accuracy: 0.6620 - val_loss: 0.8708 - val_accuracy: 0.6875

Epoch 00011: val_accuracy did not improve from 0.68750
Epoch 12/20
4/4 [==============================] - 6s 1s/step - loss: 0.8412 - accuracy: 0.6937 - val_loss: 0.8079 - val_accuracy: 0.7344

Epoch 00012: val_accuracy improved from 0.68750 to 0.73438, saving model to models\best_model_esc10_exp_0_0
Epoch 13/20
4/4 [==============================] - 6s 1s/step - loss: 0.7253 - accuracy: 0.7562 - val_loss: 0.7438 - val_accuracy: 0.7031

Epoch 00013: val_accuracy did not improve from 0.73438
Epoch 14/20
4/4 [==============================] - 6s 2s/step - loss: 0.6672 - accuracy: 0.7432 - val_loss: 0.6875 - val_accuracy: 0.7188

Epoch 00014: val_accuracy did not improve from 0.73438
Epoch 15/20
4/4 [==============================] - 6s 1s/step - loss: 0.5805 - accuracy: 0.7802 - val_loss: 0.6841 - val_accuracy: 0.7344

Epoch 00015: val_accuracy did not improve from 0.73438
Epoch 16/20
4/4 [==============================] - 6s 1s/step - loss: 0.6349 - accuracy: 0.7688 - val_loss: 0.6919 - val_accuracy: 0.7188

Epoch 00016: val_accuracy did not improve from 0.73438
Epoch 17/20
4/4 [==============================] - 6s 1s/step - loss: 0.5123 - accuracy: 0.8365 - val_loss: 0.7812 - val_accuracy: 0.7188

Epoch 00017: val_accuracy did not improve from 0.73438
Epoch 18/20
4/4 [==============================] - 6s 1s/step - loss: 0.5185 - accuracy: 0.8359 - val_loss: 0.6623 - val_accuracy: 0.7500

Epoch 00018: val_accuracy improved from 0.73438 to 0.75000, saving model to models\best_model_esc10_exp_0_0
Epoch 19/20
4/4 [==============================] - 6s 1s/step - loss: 0.4048 - accuracy: 0.8641 - val_loss: 0.6022 - val_accuracy: 0.7969

Epoch 00019: val_accuracy improved from 0.75000 to 0.79688, saving model to models\best_model_esc10_exp_0_0
Epoch 20/20
4/4 [==============================] - 6s 1s/step - loss: 0.4520 - accuracy: 0.8073 - val_loss: 0.7001 - val_accuracy: 0.7500

Epoch 00020: val_accuracy did not improve from 0.79688
Test accuracy:  0.800000011920929
Epoch 1/20
4/4 [==============================] - 6s 2s/step - loss: 2.2576 - accuracy: 0.1182 - val_loss: 1.9856 - val_accuracy: 0.2812

Epoch 00001: val_accuracy improved from -inf to 0.28125, saving model to models\best_model_esc10_exp_0_1
Epoch 2/20
4/4 [==============================] - 6s 2s/step - loss: 2.0006 - accuracy: 0.2943 - val_loss: 1.7071 - val_accuracy: 0.3594

Epoch 00002: val_accuracy improved from 0.28125 to 0.35938, saving model to models\best_model_esc10_exp_0_1
Epoch 3/20
4/4 [==============================] - 6s 1s/step - loss: 1.8005 - accuracy: 0.3104 - val_loss: 1.6498 - val_accuracy: 0.3438

Epoch 00003: val_accuracy did not improve from 0.35938
Epoch 4/20
4/4 [==============================] - 6s 1s/step - loss: 1.6479 - accuracy: 0.4245 - val_loss: 1.4513 - val_accuracy: 0.4375

Epoch 00004: val_accuracy improved from 0.35938 to 0.43750, saving model to models\best_model_esc10_exp_0_1
Epoch 5/20
4/4 [==============================] - 6s 2s/step - loss: 1.5713 - accuracy: 0.4266 - val_loss: 1.3534 - val_accuracy: 0.5156

Epoch 00005: val_accuracy improved from 0.43750 to 0.51562, saving model to models\best_model_esc10_exp_0_1
Epoch 6/20
4/4 [==============================] - 6s 2s/step - loss: 1.3704 - accuracy: 0.5057 - val_loss: 1.3632 - val_accuracy: 0.5781

Epoch 00006: val_accuracy improved from 0.51562 to 0.57812, saving model to models\best_model_esc10_exp_0_1
Epoch 7/20
4/4 [==============================] - 6s 1s/step - loss: 1.4360 - accuracy: 0.4505 - val_loss: 1.2497 - val_accuracy: 0.4844

Epoch 00007: val_accuracy did not improve from 0.57812
Epoch 8/20
4/4 [==============================] - 6s 1s/step - loss: 1.2729 - accuracy: 0.5370 - val_loss: 1.1119 - val_accuracy: 0.6406

Epoch 00008: val_accuracy improved from 0.57812 to 0.64062, saving model to models\best_model_esc10_exp_0_1
Epoch 9/20
4/4 [==============================] - 6s 1s/step - loss: 1.1795 - accuracy: 0.5807 - val_loss: 0.9729 - val_accuracy: 0.7188

Epoch 00009: val_accuracy improved from 0.64062 to 0.71875, saving model to models\best_model_esc10_exp_0_1
Epoch 10/20
4/4 [==============================] - 6s 2s/step - loss: 0.9716 - accuracy: 0.6807 - val_loss: 0.9171 - val_accuracy: 0.7031

Epoch 00010: val_accuracy did not improve from 0.71875
Epoch 11/20
4/4 [==============================] - 6s 1s/step - loss: 0.9123 - accuracy: 0.6573 - val_loss: 1.0000 - val_accuracy: 0.6719

Epoch 00011: val_accuracy did not improve from 0.71875
Epoch 12/20
4/4 [==============================] - 6s 1s/step - loss: 0.8691 - accuracy: 0.7177 - val_loss: 0.7704 - val_accuracy: 0.7344

Epoch 00012: val_accuracy improved from 0.71875 to 0.73438, saving model to models\best_model_esc10_exp_0_1
Epoch 13/20
4/4 [==============================] - 6s 2s/step - loss: 0.7394 - accuracy: 0.7182 - val_loss: 0.7956 - val_accuracy: 0.7344

Epoch 00013: val_accuracy did not improve from 0.73438
Epoch 14/20
4/4 [==============================] - 5s 1s/step - loss: 0.6294 - accuracy: 0.7625 - val_loss: 0.8904 - val_accuracy: 0.7031

Epoch 00014: val_accuracy did not improve from 0.73438
Epoch 15/20
4/4 [==============================] - 5s 1s/step - loss: 0.6434 - accuracy: 0.8021 - val_loss: 0.7476 - val_accuracy: 0.7656

Epoch 00015: val_accuracy improved from 0.73438 to 0.76562, saving model to models\best_model_esc10_exp_0_1
Epoch 16/20
4/4 [==============================] - 5s 1s/step - loss: 0.5598 - accuracy: 0.8042 - val_loss: 0.7524 - val_accuracy: 0.7812

Epoch 00016: val_accuracy improved from 0.76562 to 0.78125, saving model to models\best_model_esc10_exp_0_1
Epoch 17/20
4/4 [==============================] - 5s 1s/step - loss: 0.5148 - accuracy: 0.7865 - val_loss: 0.6767 - val_accuracy: 0.7969

Epoch 00017: val_accuracy improved from 0.78125 to 0.79688, saving model to models\best_model_esc10_exp_0_1
Epoch 18/20
4/4 [==============================] - 5s 1s/step - loss: 0.4119 - accuracy: 0.8625 - val_loss: 0.6547 - val_accuracy: 0.7344

Epoch 00018: val_accuracy did not improve from 0.79688
Epoch 19/20
4/4 [==============================] - 5s 1s/step - loss: 0.4541 - accuracy: 0.8688 - val_loss: 0.7062 - val_accuracy: 0.7500

Epoch 00019: val_accuracy did not improve from 0.79688
Epoch 20/20
4/4 [==============================] - 5s 1s/step - loss: 0.4559 - accuracy: 0.8161 - val_loss: 0.7708 - val_accuracy: 0.7188

Epoch 00020: val_accuracy did not improve from 0.79688
Test accuracy:  0.8125
Epoch 1/20
4/4 [==============================] - 6s 2s/step - loss: 2.2859 - accuracy: 0.1443 - val_loss: 2.0433 - val_accuracy: 0.2188

Epoch 00001: val_accuracy improved from -inf to 0.21875, saving model to models\best_model_esc10_exp_0_2
Epoch 2/20
4/4 [==============================] - 7s 2s/step - loss: 2.1138 - accuracy: 0.2146 - val_loss: 1.8674 - val_accuracy: 0.2656

Epoch 00002: val_accuracy improved from 0.21875 to 0.26562, saving model to models\best_model_esc10_exp_0_2
Epoch 3/20
4/4 [==============================] - 6s 2s/step - loss: 1.9353 - accuracy: 0.2573 - val_loss: 1.7383 - val_accuracy: 0.3750

Epoch 00003: val_accuracy improved from 0.26562 to 0.37500, saving model to models\best_model_esc10_exp_0_2
Epoch 4/20
4/4 [==============================] - 6s 2s/step - loss: 1.7279 - accuracy: 0.3630 - val_loss: 1.5614 - val_accuracy: 0.4219

Epoch 00004: val_accuracy improved from 0.37500 to 0.42188, saving model to models\best_model_esc10_exp_0_2
Epoch 5/20
4/4 [==============================] - 6s 1s/step - loss: 1.5758 - accuracy: 0.4464 - val_loss: 1.2971 - val_accuracy: 0.5938

Epoch 00005: val_accuracy improved from 0.42188 to 0.59375, saving model to models\best_model_esc10_exp_0_2
Epoch 6/20
4/4 [==============================] - 6s 1s/step - loss: 1.4047 - accuracy: 0.5302 - val_loss: 1.1142 - val_accuracy: 0.6406

Epoch 00006: val_accuracy improved from 0.59375 to 0.64062, saving model to models\best_model_esc10_exp_0_2
Epoch 7/20
4/4 [==============================] - 6s 2s/step - loss: 1.2652 - accuracy: 0.5531 - val_loss: 1.1942 - val_accuracy: 0.6094

Epoch 00007: val_accuracy did not improve from 0.64062
Epoch 8/20
4/4 [==============================] - 6s 1s/step - loss: 1.1203 - accuracy: 0.5849 - val_loss: 1.1145 - val_accuracy: 0.6094

Epoch 00008: val_accuracy did not improve from 0.64062
Epoch 9/20
4/4 [==============================] - 6s 1s/step - loss: 1.1066 - accuracy: 0.6557 - val_loss: 1.0495 - val_accuracy: 0.6875

Epoch 00009: val_accuracy improved from 0.64062 to 0.68750, saving model to models\best_model_esc10_exp_0_2
Epoch 10/20
4/4 [==============================] - 6s 1s/step - loss: 1.0498 - accuracy: 0.6406 - val_loss: 0.8602 - val_accuracy: 0.7188

Epoch 00010: val_accuracy improved from 0.68750 to 0.71875, saving model to models\best_model_esc10_exp_0_2
Epoch 11/20
4/4 [==============================] - 6s 1s/step - loss: 0.7963 - accuracy: 0.6911 - val_loss: 0.8925 - val_accuracy: 0.7188

Epoch 00011: val_accuracy did not improve from 0.71875
Epoch 12/20
4/4 [==============================] - 6s 2s/step - loss: 0.7746 - accuracy: 0.7260 - val_loss: 0.7712 - val_accuracy: 0.7656

Epoch 00012: val_accuracy improved from 0.71875 to 0.76562, saving model to models\best_model_esc10_exp_0_2
Epoch 13/20
4/4 [==============================] - 6s 1s/step - loss: 0.7446 - accuracy: 0.7120 - val_loss: 0.8326 - val_accuracy: 0.7344

Epoch 00013: val_accuracy did not improve from 0.76562
Epoch 14/20
4/4 [==============================] - 6s 1s/step - loss: 0.6987 - accuracy: 0.7266 - val_loss: 0.7559 - val_accuracy: 0.7812

Epoch 00014: val_accuracy improved from 0.76562 to 0.78125, saving model to models\best_model_esc10_exp_0_2
Epoch 15/20
4/4 [==============================] - 6s 2s/step - loss: 0.5109 - accuracy: 0.8370 - val_loss: 0.6986 - val_accuracy: 0.8125

Epoch 00015: val_accuracy improved from 0.78125 to 0.81250, saving model to models\best_model_esc10_exp_0_2
Epoch 16/20
4/4 [==============================] - 6s 1s/step - loss: 0.6295 - accuracy: 0.7385 - val_loss: 0.7188 - val_accuracy: 0.7969

Epoch 00016: val_accuracy did not improve from 0.81250
Epoch 17/20
4/4 [==============================] - 6s 1s/step - loss: 0.5324 - accuracy: 0.7917 - val_loss: 0.5956 - val_accuracy: 0.8125

Epoch 00017: val_accuracy did not improve from 0.81250
Epoch 18/20
4/4 [==============================] - 6s 2s/step - loss: 0.4835 - accuracy: 0.8172 - val_loss: 0.6501 - val_accuracy: 0.7344

Epoch 00018: val_accuracy did not improve from 0.81250
Epoch 19/20
4/4 [==============================] - 6s 1s/step - loss: 0.4636 - accuracy: 0.8432 - val_loss: 0.7418 - val_accuracy: 0.7812

Epoch 00019: val_accuracy did not improve from 0.81250
Epoch 20/20
4/4 [==============================] - 6s 2s/step - loss: 0.4587 - accuracy: 0.8297 - val_loss: 0.5877 - val_accuracy: 0.8281

Epoch 00020: val_accuracy improved from 0.81250 to 0.82812, saving model to models\best_model_esc10_exp_0_2
Test accuracy:  0.824999988079071
Epoch 1/20
4/4 [==============================] - 6s 2s/step - loss: 2.3035 - accuracy: 0.1141 - val_loss: 2.0688 - val_accuracy: 0.2188

Epoch 00001: val_accuracy improved from -inf to 0.21875, saving model to models\best_model_esc10_exp_0_3
Epoch 2/20
4/4 [==============================] - 6s 2s/step - loss: 2.1232 - accuracy: 0.1594 - val_loss: 1.8218 - val_accuracy: 0.3594

Epoch 00002: val_accuracy improved from 0.21875 to 0.35938, saving model to models\best_model_esc10_exp_0_3
Epoch 3/20
4/4 [==============================] - 6s 2s/step - loss: 1.8532 - accuracy: 0.3141 - val_loss: 1.6571 - val_accuracy: 0.3750

Epoch 00003: val_accuracy improved from 0.35938 to 0.37500, saving model to models\best_model_esc10_exp_0_3
Epoch 4/20
4/4 [==============================] - 6s 2s/step - loss: 1.7044 - accuracy: 0.3724 - val_loss: 1.4654 - val_accuracy: 0.4531

Epoch 00004: val_accuracy improved from 0.37500 to 0.45312, saving model to models\best_model_esc10_exp_0_3
Epoch 5/20
4/4 [==============================] - 6s 2s/step - loss: 1.6027 - accuracy: 0.3714 - val_loss: 1.2971 - val_accuracy: 0.5781

Epoch 00005: val_accuracy improved from 0.45312 to 0.57812, saving model to models\best_model_esc10_exp_0_3
Epoch 6/20
4/4 [==============================] - 6s 2s/step - loss: 1.3671 - accuracy: 0.4990 - val_loss: 1.1772 - val_accuracy: 0.6250

Epoch 00006: val_accuracy improved from 0.57812 to 0.62500, saving model to models\best_model_esc10_exp_0_3
Epoch 7/20
4/4 [==============================] - 6s 2s/step - loss: 1.2640 - accuracy: 0.5146 - val_loss: 1.0766 - val_accuracy: 0.5938

Epoch 00007: val_accuracy did not improve from 0.62500
Epoch 8/20
4/4 [==============================] - 6s 2s/step - loss: 1.1717 - accuracy: 0.6031 - val_loss: 0.9218 - val_accuracy: 0.5938

Epoch 00008: val_accuracy did not improve from 0.62500
Epoch 9/20
4/4 [==============================] - 6s 1s/step - loss: 1.0652 - accuracy: 0.6214 - val_loss: 0.9444 - val_accuracy: 0.6406

Epoch 00009: val_accuracy improved from 0.62500 to 0.64062, saving model to models\best_model_esc10_exp_0_3
Epoch 10/20
4/4 [==============================] - 6s 1s/step - loss: 0.9670 - accuracy: 0.6151 - val_loss: 0.8778 - val_accuracy: 0.6562

Epoch 00010: val_accuracy improved from 0.64062 to 0.65625, saving model to models\best_model_esc10_exp_0_3
Epoch 11/20
4/4 [==============================] - 6s 1s/step - loss: 0.8124 - accuracy: 0.6859 - val_loss: 0.7888 - val_accuracy: 0.7031

Epoch 00011: val_accuracy improved from 0.65625 to 0.70312, saving model to models\best_model_esc10_exp_0_3
Epoch 12/20
4/4 [==============================] - 5s 1s/step - loss: 0.7180 - accuracy: 0.7047 - val_loss: 0.7497 - val_accuracy: 0.7969

Epoch 00012: val_accuracy improved from 0.70312 to 0.79688, saving model to models\best_model_esc10_exp_0_3
Epoch 13/20
4/4 [==============================] - 5s 1s/step - loss: 0.6631 - accuracy: 0.7557 - val_loss: 0.7628 - val_accuracy: 0.7500

Epoch 00013: val_accuracy did not improve from 0.79688
Epoch 14/20
4/4 [==============================] - 5s 1s/step - loss: 0.6083 - accuracy: 0.7641 - val_loss: 0.6682 - val_accuracy: 0.7500

Epoch 00014: val_accuracy did not improve from 0.79688
Epoch 15/20
4/4 [==============================] - 5s 1s/step - loss: 0.5957 - accuracy: 0.7521 - val_loss: 0.8532 - val_accuracy: 0.6875

Epoch 00015: val_accuracy did not improve from 0.79688
Epoch 16/20
4/4 [==============================] - 6s 2s/step - loss: 0.5447 - accuracy: 0.7885 - val_loss: 0.6784 - val_accuracy: 0.7656

Epoch 00016: val_accuracy did not improve from 0.79688
Epoch 17/20
4/4 [==============================] - 7s 2s/step - loss: 0.3971 - accuracy: 0.8526 - val_loss: 0.6770 - val_accuracy: 0.7969

Epoch 00017: val_accuracy did not improve from 0.79688
Epoch 18/20
4/4 [==============================] - 7s 2s/step - loss: 0.4331 - accuracy: 0.8302 - val_loss: 0.6012 - val_accuracy: 0.8125

Epoch 00018: val_accuracy improved from 0.79688 to 0.81250, saving model to models\best_model_esc10_exp_0_3
Epoch 19/20
4/4 [==============================] - 7s 2s/step - loss: 0.3778 - accuracy: 0.8635 - val_loss: 0.4577 - val_accuracy: 0.8438

Epoch 00019: val_accuracy improved from 0.81250 to 0.84375, saving model to models\best_model_esc10_exp_0_3
Epoch 20/20
4/4 [==============================] - 6s 2s/step - loss: 0.3989 - accuracy: 0.8844 - val_loss: 0.6132 - val_accuracy: 0.7969

Epoch 00020: val_accuracy did not improve from 0.84375
Test accuracy:  0.8125
Epoch 1/20
4/4 [==============================] - 9s 2s/step - loss: 2.2917 - accuracy: 0.1297 - val_loss: 2.0868 - val_accuracy: 0.1406

Epoch 00001: val_accuracy improved from -inf to 0.14062, saving model to models\best_model_esc10_exp_0_4
Epoch 2/20
4/4 [==============================] - 8s 2s/step - loss: 2.0438 - accuracy: 0.2193 - val_loss: 1.7637 - val_accuracy: 0.3281

Epoch 00002: val_accuracy improved from 0.14062 to 0.32812, saving model to models\best_model_esc10_exp_0_4
Epoch 3/20
4/4 [==============================] - 8s 2s/step - loss: 1.8328 - accuracy: 0.3026 - val_loss: 1.6257 - val_accuracy: 0.4062

Epoch 00003: val_accuracy improved from 0.32812 to 0.40625, saving model to models\best_model_esc10_exp_0_4
Epoch 4/20
4/4 [==============================] - 8s 2s/step - loss: 1.7632 - accuracy: 0.3906 - val_loss: 1.5879 - val_accuracy: 0.4531

Epoch 00004: val_accuracy improved from 0.40625 to 0.45312, saving model to models\best_model_esc10_exp_0_4
Epoch 5/20
4/4 [==============================] - 8s 2s/step - loss: 1.5930 - accuracy: 0.4198 - val_loss: 1.4353 - val_accuracy: 0.4688

Epoch 00005: val_accuracy improved from 0.45312 to 0.46875, saving model to models\best_model_esc10_exp_0_4
Epoch 6/20
4/4 [==============================] - 8s 2s/step - loss: 1.4203 - accuracy: 0.4745 - val_loss: 1.2546 - val_accuracy: 0.6250

Epoch 00006: val_accuracy improved from 0.46875 to 0.62500, saving model to models\best_model_esc10_exp_0_4
Epoch 7/20
4/4 [==============================] - 8s 2s/step - loss: 1.2475 - accuracy: 0.5604 - val_loss: 1.0885 - val_accuracy: 0.6250

Epoch 00007: val_accuracy did not improve from 0.62500
Epoch 8/20
4/4 [==============================] - 8s 2s/step - loss: 1.0691 - accuracy: 0.5828 - val_loss: 1.0633 - val_accuracy: 0.5938

Epoch 00008: val_accuracy did not improve from 0.62500
Epoch 9/20
4/4 [==============================] - 8s 2s/step - loss: 1.0604 - accuracy: 0.6021 - val_loss: 0.9602 - val_accuracy: 0.6094

Epoch 00009: val_accuracy did not improve from 0.62500
Epoch 10/20
4/4 [==============================] - 8s 2s/step - loss: 0.9847 - accuracy: 0.6474 - val_loss: 0.9096 - val_accuracy: 0.6719

Epoch 00010: val_accuracy improved from 0.62500 to 0.67188, saving model to models\best_model_esc10_exp_0_4
Epoch 11/20
4/4 [==============================] - 8s 2s/step - loss: 0.8145 - accuracy: 0.7130 - val_loss: 0.8075 - val_accuracy: 0.7031

Epoch 00011: val_accuracy improved from 0.67188 to 0.70312, saving model to models\best_model_esc10_exp_0_4
Epoch 12/20
4/4 [==============================] - 7s 2s/step - loss: 0.8269 - accuracy: 0.7016 - val_loss: 0.7516 - val_accuracy: 0.6719

Epoch 00012: val_accuracy did not improve from 0.70312
Epoch 13/20
4/4 [==============================] - 7s 2s/step - loss: 0.6449 - accuracy: 0.7578 - val_loss: 0.9433 - val_accuracy: 0.6406

Epoch 00013: val_accuracy did not improve from 0.70312
Epoch 14/20
4/4 [==============================] - 7s 2s/step - loss: 0.6241 - accuracy: 0.7969 - val_loss: 0.7721 - val_accuracy: 0.7500

Epoch 00014: val_accuracy improved from 0.70312 to 0.75000, saving model to models\best_model_esc10_exp_0_4
Epoch 15/20
4/4 [==============================] - 8s 2s/step - loss: 0.5479 - accuracy: 0.7865 - val_loss: 0.8350 - val_accuracy: 0.6875

Epoch 00015: val_accuracy did not improve from 0.75000
Epoch 16/20
4/4 [==============================] - 8s 2s/step - loss: 0.5409 - accuracy: 0.8052 - val_loss: 0.6631 - val_accuracy: 0.7188

Epoch 00016: val_accuracy did not improve from 0.75000
Epoch 17/20
4/4 [==============================] - 8s 2s/step - loss: 0.4656 - accuracy: 0.8286 - val_loss: 0.7057 - val_accuracy: 0.7344

Epoch 00017: val_accuracy did not improve from 0.75000
Epoch 18/20
4/4 [==============================] - 8s 2s/step - loss: 0.4021 - accuracy: 0.8635 - val_loss: 0.6352 - val_accuracy: 0.7656

Epoch 00018: val_accuracy improved from 0.75000 to 0.76562, saving model to models\best_model_esc10_exp_0_4
Epoch 19/20
4/4 [==============================] - 8s 2s/step - loss: 0.4047 - accuracy: 0.8193 - val_loss: 0.7225 - val_accuracy: 0.7969

Epoch 00019: val_accuracy improved from 0.76562 to 0.79688, saving model to models\best_model_esc10_exp_0_4
Epoch 20/20
4/4 [==============================] - 8s 2s/step - loss: 0.3868 - accuracy: 0.8484 - val_loss: 0.6314 - val_accuracy: 0.7969

Epoch 00020: val_accuracy did not improve from 0.79688
Test accuracy:  0.7250000238418579
Epoch 1/20
4/4 [==============================] - 9s 2s/step - loss: 2.3122 - accuracy: 0.1198 - val_loss: 1.9667 - val_accuracy: 0.3594

Epoch 00001: val_accuracy improved from -inf to 0.35938, saving model to models\best_model_esc10_exp_0_5
Epoch 2/20
4/4 [==============================] - 8s 2s/step - loss: 2.0146 - accuracy: 0.2750 - val_loss: 1.7587 - val_accuracy: 0.3750

Epoch 00002: val_accuracy improved from 0.35938 to 0.37500, saving model to models\best_model_esc10_exp_0_5
Epoch 3/20
4/4 [==============================] - 8s 2s/step - loss: 1.8696 - accuracy: 0.2557 - val_loss: 1.4725 - val_accuracy: 0.4062

Epoch 00003: val_accuracy improved from 0.37500 to 0.40625, saving model to models\best_model_esc10_exp_0_5
Epoch 4/20
4/4 [==============================] - 8s 2s/step - loss: 1.6363 - accuracy: 0.3542 - val_loss: 1.3304 - val_accuracy: 0.5469

Epoch 00004: val_accuracy improved from 0.40625 to 0.54688, saving model to models\best_model_esc10_exp_0_5
Epoch 5/20
4/4 [==============================] - 8s 2s/step - loss: 1.5161 - accuracy: 0.4536 - val_loss: 1.1699 - val_accuracy: 0.5938

Epoch 00005: val_accuracy improved from 0.54688 to 0.59375, saving model to models\best_model_esc10_exp_0_5
Epoch 6/20
4/4 [==============================] - 8s 2s/step - loss: 1.3001 - accuracy: 0.5875 - val_loss: 1.1316 - val_accuracy: 0.6094

Epoch 00006: val_accuracy improved from 0.59375 to 0.60938, saving model to models\best_model_esc10_exp_0_5
Epoch 7/20
4/4 [==============================] - 8s 2s/step - loss: 1.3164 - accuracy: 0.5359 - val_loss: 0.8756 - val_accuracy: 0.7031

Epoch 00007: val_accuracy improved from 0.60938 to 0.70312, saving model to models\best_model_esc10_exp_0_5
Epoch 8/20
4/4 [==============================] - 7s 2s/step - loss: 0.9745 - accuracy: 0.6776 - val_loss: 0.8964 - val_accuracy: 0.7188

Epoch 00008: val_accuracy improved from 0.70312 to 0.71875, saving model to models\best_model_esc10_exp_0_5
Epoch 9/20
4/4 [==============================] - 8s 2s/step - loss: 0.9163 - accuracy: 0.6562 - val_loss: 0.7733 - val_accuracy: 0.7344

Epoch 00009: val_accuracy improved from 0.71875 to 0.73438, saving model to models\best_model_esc10_exp_0_5
Epoch 10/20
4/4 [==============================] - 8s 2s/step - loss: 0.7747 - accuracy: 0.7208 - val_loss: 0.7877 - val_accuracy: 0.6875

Epoch 00010: val_accuracy did not improve from 0.73438
Epoch 11/20
4/4 [==============================] - 8s 2s/step - loss: 0.6917 - accuracy: 0.7531 - val_loss: 0.6776 - val_accuracy: 0.7500

Epoch 00011: val_accuracy improved from 0.73438 to 0.75000, saving model to models\best_model_esc10_exp_0_5
Epoch 12/20
4/4 [==============================] - 8s 2s/step - loss: 0.7226 - accuracy: 0.7307 - val_loss: 0.7905 - val_accuracy: 0.7031

Epoch 00012: val_accuracy did not improve from 0.75000
Epoch 13/20
4/4 [==============================] - 8s 2s/step - loss: 0.6113 - accuracy: 0.8010 - val_loss: 0.6564 - val_accuracy: 0.8125

Epoch 00013: val_accuracy improved from 0.75000 to 0.81250, saving model to models\best_model_esc10_exp_0_5
Epoch 14/20
4/4 [==============================] - 8s 2s/step - loss: 0.6758 - accuracy: 0.7677 - val_loss: 0.6826 - val_accuracy: 0.7500

Epoch 00014: val_accuracy did not improve from 0.81250
Epoch 15/20
4/4 [==============================] - 9s 2s/step - loss: 0.5134 - accuracy: 0.8130 - val_loss: 0.6650 - val_accuracy: 0.7969

Epoch 00015: val_accuracy did not improve from 0.81250
Epoch 16/20
4/4 [==============================] - 8s 2s/step - loss: 0.5642 - accuracy: 0.7880 - val_loss: 0.7956 - val_accuracy: 0.7500

Epoch 00016: val_accuracy did not improve from 0.81250
Epoch 17/20
4/4 [==============================] - 8s 2s/step - loss: 0.6309 - accuracy: 0.7927 - val_loss: 0.8493 - val_accuracy: 0.7969

Epoch 00017: val_accuracy did not improve from 0.81250
Epoch 18/20
4/4 [==============================] - 8s 2s/step - loss: 0.4239 - accuracy: 0.8505 - val_loss: 0.6862 - val_accuracy: 0.8125

Epoch 00018: val_accuracy did not improve from 0.81250
Epoch 19/20
4/4 [==============================] - 8s 2s/step - loss: 0.4298 - accuracy: 0.8604 - val_loss: 0.6757 - val_accuracy: 0.7812

Epoch 00019: val_accuracy did not improve from 0.81250
Epoch 20/20
4/4 [==============================] - 7s 2s/step - loss: 0.3794 - accuracy: 0.8620 - val_loss: 0.6667 - val_accuracy: 0.7812

Epoch 00020: val_accuracy did not improve from 0.81250
Test accuracy:  0.7875000238418579
Epoch 1/20
4/4 [==============================] - 8s 2s/step - loss: 2.3151 - accuracy: 0.0802 - val_loss: 2.0733 - val_accuracy: 0.2500

Epoch 00001: val_accuracy improved from -inf to 0.25000, saving model to models\best_model_esc10_exp_0_6
Epoch 2/20
4/4 [==============================] - 8s 2s/step - loss: 2.1728 - accuracy: 0.2203 - val_loss: 1.8374 - val_accuracy: 0.3281

Epoch 00002: val_accuracy improved from 0.25000 to 0.32812, saving model to models\best_model_esc10_exp_0_6
Epoch 3/20
4/4 [==============================] - 8s 2s/step - loss: 1.8916 - accuracy: 0.2818 - val_loss: 1.6416 - val_accuracy: 0.3906

Epoch 00003: val_accuracy improved from 0.32812 to 0.39062, saving model to models\best_model_esc10_exp_0_6
Epoch 4/20
4/4 [==============================] - 8s 2s/step - loss: 1.7133 - accuracy: 0.3625 - val_loss: 1.4366 - val_accuracy: 0.4375

Epoch 00004: val_accuracy improved from 0.39062 to 0.43750, saving model to models\best_model_esc10_exp_0_6
Epoch 5/20
4/4 [==============================] - 8s 2s/step - loss: 1.5308 - accuracy: 0.4615 - val_loss: 1.2712 - val_accuracy: 0.6250

Epoch 00005: val_accuracy improved from 0.43750 to 0.62500, saving model to models\best_model_esc10_exp_0_6
Epoch 6/20
4/4 [==============================] - 8s 2s/step - loss: 1.4296 - accuracy: 0.4630 - val_loss: 1.2279 - val_accuracy: 0.5938

Epoch 00006: val_accuracy did not improve from 0.62500
Epoch 7/20
4/4 [==============================] - 8s 2s/step - loss: 1.2712 - accuracy: 0.5500 - val_loss: 1.1559 - val_accuracy: 0.6094

Epoch 00007: val_accuracy did not improve from 0.62500
Epoch 8/20
4/4 [==============================] - 8s 2s/step - loss: 1.0292 - accuracy: 0.6432 - val_loss: 1.0197 - val_accuracy: 0.5781

Epoch 00008: val_accuracy did not improve from 0.62500
Epoch 9/20
4/4 [==============================] - 8s 2s/step - loss: 1.0504 - accuracy: 0.6229 - val_loss: 0.9888 - val_accuracy: 0.5781

Epoch 00009: val_accuracy did not improve from 0.62500
Epoch 10/20
4/4 [==============================] - 8s 2s/step - loss: 0.8743 - accuracy: 0.7031 - val_loss: 1.0461 - val_accuracy: 0.6406

Epoch 00010: val_accuracy improved from 0.62500 to 0.64062, saving model to models\best_model_esc10_exp_0_6
Epoch 11/20
4/4 [==============================] - 8s 2s/step - loss: 0.8881 - accuracy: 0.6797 - val_loss: 1.0007 - val_accuracy: 0.6875

Epoch 00011: val_accuracy improved from 0.64062 to 0.68750, saving model to models\best_model_esc10_exp_0_6
Epoch 12/20
4/4 [==============================] - 8s 2s/step - loss: 0.7714 - accuracy: 0.7104 - val_loss: 0.9521 - val_accuracy: 0.6250

Epoch 00012: val_accuracy did not improve from 0.68750
Epoch 13/20
4/4 [==============================] - 8s 2s/step - loss: 0.7737 - accuracy: 0.7161 - val_loss: 0.8000 - val_accuracy: 0.7031

Epoch 00013: val_accuracy improved from 0.68750 to 0.70312, saving model to models\best_model_esc10_exp_0_6
Epoch 14/20
4/4 [==============================] - 8s 2s/step - loss: 0.6754 - accuracy: 0.7568 - val_loss: 0.7523 - val_accuracy: 0.7656

Epoch 00014: val_accuracy improved from 0.70312 to 0.76562, saving model to models\best_model_esc10_exp_0_6
Epoch 15/20
4/4 [==============================] - 8s 2s/step - loss: 0.6057 - accuracy: 0.7833 - val_loss: 0.9006 - val_accuracy: 0.7500

Epoch 00015: val_accuracy did not improve from 0.76562
Epoch 16/20
4/4 [==============================] - 8s 2s/step - loss: 0.6086 - accuracy: 0.7854 - val_loss: 0.8928 - val_accuracy: 0.7500

Epoch 00016: val_accuracy did not improve from 0.76562
Epoch 17/20
4/4 [==============================] - 8s 2s/step - loss: 0.5854 - accuracy: 0.7807 - val_loss: 0.8018 - val_accuracy: 0.7031

Epoch 00017: val_accuracy did not improve from 0.76562
Epoch 18/20
4/4 [==============================] - 7s 2s/step - loss: 0.5388 - accuracy: 0.8193 - val_loss: 0.7168 - val_accuracy: 0.7969

Epoch 00018: val_accuracy improved from 0.76562 to 0.79688, saving model to models\best_model_esc10_exp_0_6
Epoch 19/20
4/4 [==============================] - 7s 2s/step - loss: 0.4130 - accuracy: 0.8625 - val_loss: 0.8113 - val_accuracy: 0.7344

Epoch 00019: val_accuracy did not improve from 0.79688
Epoch 20/20
4/4 [==============================] - 7s 2s/step - loss: 0.3857 - accuracy: 0.8995 - val_loss: 0.7825 - val_accuracy: 0.7812

Epoch 00020: val_accuracy did not improve from 0.79688
Test accuracy:  0.7250000238418579
Epoch 1/20
4/4 [==============================] - 8s 2s/step - loss: 2.3067 - accuracy: 0.1203 - val_loss: 1.9509 - val_accuracy: 0.2344

Epoch 00001: val_accuracy improved from -inf to 0.23438, saving model to models\best_model_esc10_exp_0_7
Epoch 2/20
4/4 [==============================] - 8s 2s/step - loss: 2.0001 - accuracy: 0.2448 - val_loss: 1.6773 - val_accuracy: 0.3750

Epoch 00002: val_accuracy improved from 0.23438 to 0.37500, saving model to models\best_model_esc10_exp_0_7
Epoch 3/20
4/4 [==============================] - 9s 2s/step - loss: 1.8028 - accuracy: 0.3422 - val_loss: 1.5055 - val_accuracy: 0.5469

Epoch 00003: val_accuracy improved from 0.37500 to 0.54688, saving model to models\best_model_esc10_exp_0_7
Epoch 4/20
4/4 [==============================] - 9s 2s/step - loss: 1.6232 - accuracy: 0.3828 - val_loss: 1.3191 - val_accuracy: 0.5625

Epoch 00004: val_accuracy improved from 0.54688 to 0.56250, saving model to models\best_model_esc10_exp_0_7
Epoch 5/20
4/4 [==============================] - 9s 2s/step - loss: 1.5311 - accuracy: 0.4328 - val_loss: 1.1433 - val_accuracy: 0.6250

Epoch 00005: val_accuracy improved from 0.56250 to 0.62500, saving model to models\best_model_esc10_exp_0_7
Epoch 6/20
4/4 [==============================] - 9s 2s/step - loss: 1.3725 - accuracy: 0.5323 - val_loss: 1.1556 - val_accuracy: 0.6094

Epoch 00006: val_accuracy did not improve from 0.62500
Epoch 7/20
4/4 [==============================] - 8s 2s/step - loss: 1.2036 - accuracy: 0.5552 - val_loss: 1.0525 - val_accuracy: 0.6406

Epoch 00007: val_accuracy improved from 0.62500 to 0.64062, saving model to models\best_model_esc10_exp_0_7
Epoch 8/20
4/4 [==============================] - 9s 2s/step - loss: 1.1145 - accuracy: 0.5818 - val_loss: 1.0284 - val_accuracy: 0.6562

Epoch 00008: val_accuracy improved from 0.64062 to 0.65625, saving model to models\best_model_esc10_exp_0_7
Epoch 9/20
4/4 [==============================] - 8s 2s/step - loss: 1.0250 - accuracy: 0.6365 - val_loss: 1.0351 - val_accuracy: 0.7188

Epoch 00009: val_accuracy improved from 0.65625 to 0.71875, saving model to models\best_model_esc10_exp_0_7
Epoch 10/20
4/4 [==============================] - 9s 2s/step - loss: 0.9128 - accuracy: 0.6786 - val_loss: 0.8660 - val_accuracy: 0.7344

Epoch 00010: val_accuracy improved from 0.71875 to 0.73438, saving model to models\best_model_esc10_exp_0_7
Epoch 11/20
4/4 [==============================] - 9s 2s/step - loss: 0.8145 - accuracy: 0.7292 - val_loss: 0.7928 - val_accuracy: 0.7188

Epoch 00011: val_accuracy did not improve from 0.73438
Epoch 12/20
4/4 [==============================] - 9s 2s/step - loss: 0.6685 - accuracy: 0.7615 - val_loss: 0.7889 - val_accuracy: 0.7656

Epoch 00012: val_accuracy improved from 0.73438 to 0.76562, saving model to models\best_model_esc10_exp_0_7
Epoch 13/20
4/4 [==============================] - 8s 2s/step - loss: 0.6880 - accuracy: 0.7583 - val_loss: 0.7045 - val_accuracy: 0.7500

Epoch 00013: val_accuracy did not improve from 0.76562
Epoch 14/20
4/4 [==============================] - 9s 2s/step - loss: 0.6630 - accuracy: 0.7604 - val_loss: 0.7334 - val_accuracy: 0.7812

Epoch 00014: val_accuracy improved from 0.76562 to 0.78125, saving model to models\best_model_esc10_exp_0_7
Epoch 15/20
4/4 [==============================] - 8s 2s/step - loss: 0.6144 - accuracy: 0.8021 - val_loss: 0.6673 - val_accuracy: 0.7656

Epoch 00015: val_accuracy did not improve from 0.78125
Epoch 16/20
4/4 [==============================] - 8s 2s/step - loss: 0.4869 - accuracy: 0.8073 - val_loss: 0.6632 - val_accuracy: 0.7812

Epoch 00016: val_accuracy did not improve from 0.78125
Epoch 17/20
4/4 [==============================] - 8s 2s/step - loss: 0.4953 - accuracy: 0.8380 - val_loss: 0.6736 - val_accuracy: 0.8281

Epoch 00017: val_accuracy improved from 0.78125 to 0.82812, saving model to models\best_model_esc10_exp_0_7
Epoch 18/20
4/4 [==============================] - 8s 2s/step - loss: 0.4971 - accuracy: 0.8677 - val_loss: 0.6339 - val_accuracy: 0.8125

Epoch 00018: val_accuracy did not improve from 0.82812
Epoch 19/20
4/4 [==============================] - 8s 2s/step - loss: 0.4136 - accuracy: 0.8599 - val_loss: 0.6389 - val_accuracy: 0.8125

Epoch 00019: val_accuracy did not improve from 0.82812
Epoch 20/20
4/4 [==============================] - 8s 2s/step - loss: 0.3981 - accuracy: 0.8635 - val_loss: 0.5227 - val_accuracy: 0.8594

Epoch 00020: val_accuracy improved from 0.82812 to 0.85938, saving model to models\best_model_esc10_exp_0_7
Test accuracy:  0.800000011920929
Epoch 1/20
4/4 [==============================] - 8s 2s/step - loss: 2.2925 - accuracy: 0.1182 - val_loss: 1.9591 - val_accuracy: 0.2812

Epoch 00001: val_accuracy improved from -inf to 0.28125, saving model to models\best_model_esc10_exp_0_8
Epoch 2/20
4/4 [==============================] - 8s 2s/step - loss: 2.0783 - accuracy: 0.2052 - val_loss: 1.6811 - val_accuracy: 0.3438

Epoch 00002: val_accuracy improved from 0.28125 to 0.34375, saving model to models\best_model_esc10_exp_0_8
Epoch 3/20
4/4 [==============================] - 8s 2s/step - loss: 1.7411 - accuracy: 0.3245 - val_loss: 1.4704 - val_accuracy: 0.4844

Epoch 00003: val_accuracy improved from 0.34375 to 0.48438, saving model to models\best_model_esc10_exp_0_8
Epoch 4/20
4/4 [==============================] - 9s 2s/step - loss: 1.6575 - accuracy: 0.3708 - val_loss: 1.3493 - val_accuracy: 0.4219

Epoch 00004: val_accuracy did not improve from 0.48438
Epoch 5/20
4/4 [==============================] - 9s 2s/step - loss: 1.4909 - accuracy: 0.4469 - val_loss: 1.2376 - val_accuracy: 0.5000

Epoch 00005: val_accuracy improved from 0.48438 to 0.50000, saving model to models\best_model_esc10_exp_0_8
Epoch 6/20
4/4 [==============================] - 9s 2s/step - loss: 1.2649 - accuracy: 0.5635 - val_loss: 1.0259 - val_accuracy: 0.6875

Epoch 00006: val_accuracy improved from 0.50000 to 0.68750, saving model to models\best_model_esc10_exp_0_8
Epoch 7/20
4/4 [==============================] - 9s 2s/step - loss: 1.1180 - accuracy: 0.6073 - val_loss: 0.9954 - val_accuracy: 0.6719

Epoch 00007: val_accuracy did not improve from 0.68750
Epoch 8/20
4/4 [==============================] - 9s 2s/step - loss: 1.1247 - accuracy: 0.6151 - val_loss: 0.9554 - val_accuracy: 0.6406

Epoch 00008: val_accuracy did not improve from 0.68750
Epoch 9/20
4/4 [==============================] - 9s 2s/step - loss: 0.9547 - accuracy: 0.6927 - val_loss: 0.8586 - val_accuracy: 0.6875

Epoch 00009: val_accuracy did not improve from 0.68750
Epoch 10/20
4/4 [==============================] - 9s 2s/step - loss: 0.7953 - accuracy: 0.7276 - val_loss: 0.7758 - val_accuracy: 0.7500

Epoch 00010: val_accuracy improved from 0.68750 to 0.75000, saving model to models\best_model_esc10_exp_0_8
Epoch 11/20
4/4 [==============================] - 9s 2s/step - loss: 0.8185 - accuracy: 0.6734 - val_loss: 0.8563 - val_accuracy: 0.7031

Epoch 00011: val_accuracy did not improve from 0.75000
Epoch 12/20
4/4 [==============================] - 9s 2s/step - loss: 0.7887 - accuracy: 0.7036 - val_loss: 0.8050 - val_accuracy: 0.6875

Epoch 00012: val_accuracy did not improve from 0.75000
Epoch 13/20
4/4 [==============================] - 9s 2s/step - loss: 0.7287 - accuracy: 0.7302 - val_loss: 0.7242 - val_accuracy: 0.7188

Epoch 00013: val_accuracy did not improve from 0.75000
Epoch 14/20
4/4 [==============================] - 9s 2s/step - loss: 0.6188 - accuracy: 0.7661 - val_loss: 0.6511 - val_accuracy: 0.8125

Epoch 00014: val_accuracy improved from 0.75000 to 0.81250, saving model to models\best_model_esc10_exp_0_8
Epoch 15/20
4/4 [==============================] - 8s 2s/step - loss: 0.6324 - accuracy: 0.8057 - val_loss: 0.6571 - val_accuracy: 0.7812

Epoch 00015: val_accuracy did not improve from 0.81250
Epoch 16/20
4/4 [==============================] - 8s 2s/step - loss: 0.5426 - accuracy: 0.8099 - val_loss: 0.7719 - val_accuracy: 0.7188

Epoch 00016: val_accuracy did not improve from 0.81250
Epoch 17/20
4/4 [==============================] - 8s 2s/step - loss: 0.6004 - accuracy: 0.7859 - val_loss: 0.8367 - val_accuracy: 0.7812

Epoch 00017: val_accuracy did not improve from 0.81250
Epoch 18/20
4/4 [==============================] - 8s 2s/step - loss: 0.4721 - accuracy: 0.8099 - val_loss: 0.5702 - val_accuracy: 0.7969

Epoch 00018: val_accuracy did not improve from 0.81250
Epoch 19/20
4/4 [==============================] - 8s 2s/step - loss: 0.4149 - accuracy: 0.8766 - val_loss: 0.6327 - val_accuracy: 0.7812

Epoch 00019: val_accuracy did not improve from 0.81250
Epoch 20/20
4/4 [==============================] - 8s 2s/step - loss: 0.3786 - accuracy: 0.8719 - val_loss: 0.8976 - val_accuracy: 0.7344

Epoch 00020: val_accuracy did not improve from 0.81250
Test accuracy:  0.699999988079071
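The eight completed runs in this excerpt report test accuracies of roughly 0.8125, 0.825, 0.8125, 0.725, 0.7875, 0.725, 0.80, and 0.70 (the final run below is cut off before its result). A quick aggregation of these values, copied from the `Test accuracy:` lines above, gives a feel for the run-to-run variance:

```python
from statistics import mean, stdev

# Test accuracies reported above (rounded to 4 decimal places),
# one value per completed experiment run.
test_accs = [0.8125, 0.8250, 0.8125, 0.7250, 0.7875, 0.7250, 0.8000, 0.7000]

print(f"mean = {mean(test_accs):.4f}, std = {stdev(test_accs):.4f}")
```

The spread of several percentage points between identical runs suggests the small validation/test splits make single-run accuracies noisy, so averaging over repeated runs is a more reliable summary.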
Epoch 1/20
4/4 [==============================] - 9s 2s/step - loss: 2.2582 - accuracy: 0.1167 - val_loss: 1.8072 - val_accuracy: 0.3594

Epoch 00001: val_accuracy improved from -inf to 0.35938, saving model to models\best_model_esc10_exp_0_9
Epoch 2/20
4/4 [==============================] - 8s 2s/step - loss: 1.9849 - accuracy: 0.2552 - val_loss: 1.7662 - val_accuracy: 0.3750

Epoch 00002: val_accuracy improved from 0.35938 to 0.37500, saving model to models\best_model_esc10_exp_0_9
Epoch 3/20
4/4 [==============================] - 8s 2s/step - loss: 1.7963 - accuracy: 0.3234 - val_loss: 1.4645 - val_accuracy: 0.5156

Epoch 00003: val_accuracy improved from 0.37500 to 0.51562, saving model to models\best_model_esc10_exp_0_9
Epoch 4/20
4/4 [==============================] - 8s 2s/step - loss: 1.6252 - accuracy: 0.3854 - val_loss: 1.3373 - val_accuracy: 0.5469

Epoch 00004: val_accuracy improved from 0.51562 to 0.54688, saving model to models\best_model_esc10_exp_0_9
Epoch 5/20
4/4 [==============================] - 8s 2s/step - loss: 1.4188 - accuracy: 0.4703 - val_loss: 1.2769 - val_accuracy: 0.6250

Epoch 00005: val_accuracy improved from 0.54688 to 0.62500, saving model to models\best_model_esc10_exp_0_9
Epoch 6/20
4/4 [==============================] - 8s 2s/step - loss: 1.3866 - accuracy: 0.5016 - val_loss: 1.1300 - val_accuracy: 0.6250

Epoch 00006: val_accuracy did not improve from 0.62500
Epoch 7/20
4/4 [==============================] - 8s 2s/step - loss: 1.1754 - accuracy: 0.6260 - val_loss: 0.9815 - val_accuracy: 0.6562

Epoch 00007: val_accuracy improved from 0.62500 to 0.65625, saving model to models\best_model_esc10_exp_0_9
Epoch 8/20
4/4 [==============================] - 8s 2s/step - loss: 1.0308 - accuracy: 0.6333 - val_loss: 0.9743 - val_accuracy: 0.6406

Epoch 00008: val_accuracy did not improve from 0.65625
Epoch 9/20
4/4 [==============================] - 8s 2s/step - loss: 0.8885 - accuracy: 0.6505 - val_loss: 0.8268 - val_accuracy: 0.6875

Epoch 00009: val_accuracy improved from 0.65625 to 0.68750, saving model to models\best_model_esc10_exp_0_9
Epoch 10/20
4/4 [==============================] - 8s 2s/step - loss: 0.8474 - accuracy: 0.7156 - val_loss: 0.8949 - val_accuracy: 0.7031

Epoch 00010: val_accuracy improved from 0.68750 to 0.70312, saving model to models\best_model_esc10_exp_0_9
Epoch 11/20
4/4 [==============================] - 8s 2s/step - loss: 0.8570 - accuracy: 0.6703 - val_loss: 0.7352 - val_accuracy: 0.7656

Epoch 00011: val_accuracy improved from 0.70312 to 0.76562, saving model to models\best_model_esc10_exp_0_9
Epoch 12/20
4/4 [==============================] - 8s 2s/step - loss: 0.7467 - accuracy: 0.7385 - val_loss: 0.8043 - val_accuracy: 0.7344

Epoch 00012: val_accuracy did not improve from 0.76562
Epoch 13/20
4/4 [==============================] - 8s 2s/step - loss: 0.7026 - accuracy: 0.7667 - val_loss: 0.6985 - val_accuracy: 0.7656

Epoch 00013: val_accuracy did not improve from 0.76562
Epoch 14/20
4/4 [==============================] - 7s 2s/step - loss: 0.5397 - accuracy: 0.7937 - val_loss: 0.6642 - val_accuracy: 0.7344

Epoch 00014: val_accuracy did not improve from 0.76562
Epoch 15/20
4/4 [==============================] - 7s 2s/step - loss: 0.5978 - accuracy: 0.7500 - val_loss: 0.6343 - val_accuracy: 0.7812

Epoch 00015: val_accuracy improved from 0.76562 to 0.78125, saving model to models\best_model_esc10_exp_0_9
Epoch 16/20
4/4 [==============================] - 8s 2s/step - loss: 0.5929 - accuracy: 0.7953 - val_loss: 0.6321 - val_accuracy: 0.8281

Epoch 00016: val_accuracy improved from 0.78125 to 0.82812, saving model to models\best_model_esc10_exp_0_9
Epoch 17/20
4/4 [==============================] - 8s 2s/step - loss: 0.4831 - accuracy: 0.8115 - val_loss: 0.7151 - val_accuracy: 0.7969

Epoch 00017: val_accuracy did not improve from 0.82812
Epoch 18/20
4/4 [==============================] - 8s 2s/step - loss: 0.4179 - accuracy: 0.8375 - val_loss: 0.6343 - val_accuracy: 0.8125

Epoch 00018: val_accuracy did not improve from 0.82812
Epoch 19/20
4/4 [==============================] - 8s 2s/step - loss: 0.3864 - accuracy: 0.8547 - val_loss: 0.5308 - val_accuracy: 0.8594

Epoch 00019: val_accuracy improved from 0.82812 to 0.85938, saving model to models\best_model_esc10_exp_0_9
Epoch 20/20
4/4 [==============================] - 8s 2s/step - loss: 0.4112 - accuracy: 0.8677 - val_loss: 0.6061 - val_accuracy: 0.8125

Epoch 00020: val_accuracy did not improve from 0.85938
Test accuracy:  0.824999988079071
Epoch 1/20
4/4 [==============================] - 8s 2s/step - loss: 2.2985 - accuracy: 0.1391 - val_loss: 2.0390 - val_accuracy: 0.1406

Epoch 00001: val_accuracy improved from -inf to 0.14062, saving model to models\best_model_esc10_exp_0_10
Epoch 2/20
4/4 [==============================] - 8s 2s/step - loss: 2.0067 - accuracy: 0.2531 - val_loss: 1.7610 - val_accuracy: 0.3438

Epoch 00002: val_accuracy improved from 0.14062 to 0.34375, saving model to models\best_model_esc10_exp_0_10
Epoch 3/20
4/4 [==============================] - 8s 2s/step - loss: 1.7737 - accuracy: 0.3604 - val_loss: 1.5245 - val_accuracy: 0.3750

Epoch 00003: val_accuracy improved from 0.34375 to 0.37500, saving model to models\best_model_esc10_exp_0_10
Epoch 4/20
4/4 [==============================] - 8s 2s/step - loss: 1.5839 - accuracy: 0.4156 - val_loss: 1.3760 - val_accuracy: 0.4844

Epoch 00004: val_accuracy improved from 0.37500 to 0.48438, saving model to models\best_model_esc10_exp_0_10
Epoch 5/20
4/4 [==============================] - 8s 2s/step - loss: 1.4644 - accuracy: 0.4458 - val_loss: 1.2849 - val_accuracy: 0.4844

Epoch 00005: val_accuracy did not improve from 0.48438
Epoch 6/20
4/4 [==============================] - 8s 2s/step - loss: 1.2920 - accuracy: 0.5693 - val_loss: 1.1781 - val_accuracy: 0.5625

Epoch 00006: val_accuracy improved from 0.48438 to 0.56250, saving model to models\best_model_esc10_exp_0_10
Epoch 7/20
4/4 [==============================] - 8s 2s/step - loss: 1.2405 - accuracy: 0.5792 - val_loss: 1.1471 - val_accuracy: 0.5938

Epoch 00007: val_accuracy improved from 0.56250 to 0.59375, saving model to models\best_model_esc10_exp_0_10
Epoch 8/20
4/4 [==============================] - 8s 2s/step - loss: 1.1563 - accuracy: 0.5896 - val_loss: 1.1453 - val_accuracy: 0.5781

Epoch 00008: val_accuracy did not improve from 0.59375
Epoch 9/20
4/4 [==============================] - 8s 2s/step - loss: 0.9625 - accuracy: 0.6505 - val_loss: 1.0032 - val_accuracy: 0.6406

Epoch 00009: val_accuracy improved from 0.59375 to 0.64062, saving model to models\best_model_esc10_exp_0_10
Epoch 10/20
4/4 [==============================] - 8s 2s/step - loss: 1.0870 - accuracy: 0.6016 - val_loss: 0.8601 - val_accuracy: 0.6719

Epoch 00010: val_accuracy improved from 0.64062 to 0.67188, saving model to models\best_model_esc10_exp_0_10
Epoch 11/20
4/4 [==============================] - 8s 2s/step - loss: 0.8642 - accuracy: 0.6927 - val_loss: 0.9722 - val_accuracy: 0.6406

Epoch 00011: val_accuracy did not improve from 0.67188
Epoch 12/20
4/4 [==============================] - 8s 2s/step - loss: 0.8080 - accuracy: 0.7120 - val_loss: 0.8983 - val_accuracy: 0.7031

Epoch 00012: val_accuracy improved from 0.67188 to 0.70312, saving model to models\best_model_esc10_exp_0_10
Epoch 13/20
4/4 [==============================] - 7s 2s/step - loss: 0.7617 - accuracy: 0.7188 - val_loss: 0.9010 - val_accuracy: 0.6719

Epoch 00013: val_accuracy did not improve from 0.70312
Epoch 14/20
4/4 [==============================] - 7s 2s/step - loss: 0.7140 - accuracy: 0.7000 - val_loss: 0.7019 - val_accuracy: 0.6719

Epoch 00014: val_accuracy did not improve from 0.70312
Epoch 15/20
4/4 [==============================] - 8s 2s/step - loss: 0.6226 - accuracy: 0.7469 - val_loss: 0.8471 - val_accuracy: 0.7656

Epoch 00015: val_accuracy improved from 0.70312 to 0.76562, saving model to models\best_model_esc10_exp_0_10
Epoch 16/20
4/4 [==============================] - 8s 2s/step - loss: 0.5239 - accuracy: 0.7927 - val_loss: 0.7059 - val_accuracy: 0.7812

Epoch 00016: val_accuracy improved from 0.76562 to 0.78125, saving model to models\best_model_esc10_exp_0_10
Epoch 17/20
4/4 [==============================] - 8s 2s/step - loss: 0.4965 - accuracy: 0.8151 - val_loss: 0.7364 - val_accuracy: 0.7969

Epoch 00017: val_accuracy improved from 0.78125 to 0.79688, saving model to models\best_model_esc10_exp_0_10
Epoch 18/20
4/4 [==============================] - 8s 2s/step - loss: 0.5112 - accuracy: 0.8594 - val_loss: 0.6404 - val_accuracy: 0.7969

Epoch 00018: val_accuracy did not improve from 0.79688
Epoch 19/20
4/4 [==============================] - 8s 2s/step - loss: 0.4279 - accuracy: 0.8557 - val_loss: 0.7038 - val_accuracy: 0.7812

Epoch 00019: val_accuracy did not improve from 0.79688
Epoch 20/20
4/4 [==============================] - 8s 2s/step - loss: 0.4400 - accuracy: 0.8318 - val_loss: 0.8061 - val_accuracy: 0.7812

Epoch 00020: val_accuracy did not improve from 0.79688
Test accuracy:  0.762499988079071
Epoch 1/20
4/4 [==============================] - 9s 2s/step - loss: 2.3155 - accuracy: 0.1250 - val_loss: 2.1673 - val_accuracy: 0.1875

Epoch 00001: val_accuracy improved from -inf to 0.18750, saving model to models\best_model_esc10_exp_0_11
Epoch 2/20
4/4 [==============================] - 8s 2s/step - loss: 2.0971 - accuracy: 0.2531 - val_loss: 1.9456 - val_accuracy: 0.2031

Epoch 00002: val_accuracy improved from 0.18750 to 0.20312, saving model to models\best_model_esc10_exp_0_11
Epoch 3/20
4/4 [==============================] - 8s 2s/step - loss: 1.9982 - accuracy: 0.2745 - val_loss: 1.7325 - val_accuracy: 0.4375

Epoch 00003: val_accuracy improved from 0.20312 to 0.43750, saving model to models\best_model_esc10_exp_0_11
Epoch 4/20
4/4 [==============================] - 8s 2s/step - loss: 1.7742 - accuracy: 0.3557 - val_loss: 1.5713 - val_accuracy: 0.4375

Epoch 00004: val_accuracy did not improve from 0.43750
Epoch 5/20
4/4 [==============================] - 8s 2s/step - loss: 1.5568 - accuracy: 0.3854 - val_loss: 1.3950 - val_accuracy: 0.5000

Epoch 00005: val_accuracy improved from 0.43750 to 0.50000, saving model to models\best_model_esc10_exp_0_11
Epoch 6/20
4/4 [==============================] - 8s 2s/step - loss: 1.5795 - accuracy: 0.4281 - val_loss: 1.3280 - val_accuracy: 0.5469

Epoch 00006: val_accuracy improved from 0.50000 to 0.54688, saving model to models\best_model_esc10_exp_0_11
Epoch 7/20
4/4 [==============================] - 8s 2s/step - loss: 1.3805 - accuracy: 0.5167 - val_loss: 1.3208 - val_accuracy: 0.5938

Epoch 00007: val_accuracy improved from 0.54688 to 0.59375, saving model to models\best_model_esc10_exp_0_11
Epoch 8/20
4/4 [==============================] - 8s 2s/step - loss: 1.2985 - accuracy: 0.5589 - val_loss: 1.1335 - val_accuracy: 0.6250

Epoch 00008: val_accuracy improved from 0.59375 to 0.62500, saving model to models\best_model_esc10_exp_0_11
Epoch 9/20
4/4 [==============================] - 8s 2s/step - loss: 1.0922 - accuracy: 0.6208 - val_loss: 1.1189 - val_accuracy: 0.6094

Epoch 00009: val_accuracy did not improve from 0.62500
Epoch 10/20
4/4 [==============================] - 8s 2s/step - loss: 1.0486 - accuracy: 0.6464 - val_loss: 1.0472 - val_accuracy: 0.6094

Epoch 00010: val_accuracy did not improve from 0.62500
Epoch 11/20
4/4 [==============================] - 8s 2s/step - loss: 0.8814 - accuracy: 0.6474 - val_loss: 0.9001 - val_accuracy: 0.7031

Epoch 00011: val_accuracy improved from 0.62500 to 0.70312, saving model to models\best_model_esc10_exp_0_11
Epoch 12/20
4/4 [==============================] - 7s 2s/step - loss: 0.7569 - accuracy: 0.7307 - val_loss: 0.8704 - val_accuracy: 0.6875

Epoch 00012: val_accuracy did not improve from 0.70312
Epoch 13/20
4/4 [==============================] - 7s 2s/step - loss: 0.8394 - accuracy: 0.6755 - val_loss: 0.8355 - val_accuracy: 0.7812

Epoch 00013: val_accuracy improved from 0.70312 to 0.78125, saving model to models\best_model_esc10_exp_0_11
Epoch 14/20
4/4 [==============================] - 7s 2s/step - loss: 0.8495 - accuracy: 0.6818 - val_loss: 0.8156 - val_accuracy: 0.7500

Epoch 00014: val_accuracy did not improve from 0.78125
Epoch 15/20
4/4 [==============================] - 8s 2s/step - loss: 0.7250 - accuracy: 0.7526 - val_loss: 0.7034 - val_accuracy: 0.7812

Epoch 00015: val_accuracy did not improve from 0.78125
Epoch 16/20
4/4 [==============================] - 8s 2s/step - loss: 0.6368 - accuracy: 0.7562 - val_loss: 0.7335 - val_accuracy: 0.7656

Epoch 00016: val_accuracy did not improve from 0.78125
Epoch 17/20
4/4 [==============================] - 8s 2s/step - loss: 0.5612 - accuracy: 0.8099 - val_loss: 0.7836 - val_accuracy: 0.7656

Epoch 00017: val_accuracy did not improve from 0.78125
Epoch 18/20
4/4 [==============================] - 8s 2s/step - loss: 0.5517 - accuracy: 0.8141 - val_loss: 0.7599 - val_accuracy: 0.7656

Epoch 00018: val_accuracy did not improve from 0.78125
Epoch 19/20
4/4 [==============================] - 8s 2s/step - loss: 0.5076 - accuracy: 0.8161 - val_loss: 0.7668 - val_accuracy: 0.7656

Epoch 00019: val_accuracy did not improve from 0.78125
Epoch 20/20
4/4 [==============================] - 8s 2s/step - loss: 0.5067 - accuracy: 0.8391 - val_loss: 0.5983 - val_accuracy: 0.8281

Epoch 00020: val_accuracy improved from 0.78125 to 0.82812, saving model to models\best_model_esc10_exp_0_11
Test accuracy:  0.7749999761581421
Epoch 1/20
4/4 [==============================] - 7s 2s/step - loss: 2.3550 - accuracy: 0.1005 - val_loss: 2.1828 - val_accuracy: 0.2500

Epoch 00001: val_accuracy improved from -inf to 0.25000, saving model to models\best_model_esc10_exp_0_12
Epoch 2/20
4/4 [==============================] - 7s 2s/step - loss: 2.1567 - accuracy: 0.2188 - val_loss: 1.8570 - val_accuracy: 0.2969

Epoch 00002: val_accuracy improved from 0.25000 to 0.29688, saving model to models\best_model_esc10_exp_0_12
Epoch 3/20
4/4 [==============================] - 6s 2s/step - loss: 1.9985 - accuracy: 0.2568 - val_loss: 1.7115 - val_accuracy: 0.4688

Epoch 00003: val_accuracy improved from 0.29688 to 0.46875, saving model to models\best_model_esc10_exp_0_12
Epoch 4/20
4/4 [==============================] - 7s 2s/step - loss: 1.7112 - accuracy: 0.3786 - val_loss: 1.4098 - val_accuracy: 0.5156

Epoch 00004: val_accuracy improved from 0.46875 to 0.51562, saving model to models\best_model_esc10_exp_0_12
Epoch 5/20
4/4 [==============================] - 6s 2s/step - loss: 1.6124 - accuracy: 0.4240 - val_loss: 1.2296 - val_accuracy: 0.5938

Epoch 00005: val_accuracy improved from 0.51562 to 0.59375, saving model to models\best_model_esc10_exp_0_12
Epoch 6/20
4/4 [==============================] - 6s 2s/step - loss: 1.4094 - accuracy: 0.4896 - val_loss: 1.1873 - val_accuracy: 0.6406

Epoch 00006: val_accuracy improved from 0.59375 to 0.64062, saving model to models\best_model_esc10_exp_0_12
Epoch 7/20
4/4 [==============================] - 6s 2s/step - loss: 1.2445 - accuracy: 0.5406 - val_loss: 1.1012 - val_accuracy: 0.6406

Epoch 00007: val_accuracy did not improve from 0.64062
Epoch 8/20
4/4 [==============================] - 6s 2s/step - loss: 1.1879 - accuracy: 0.5469 - val_loss: 0.9887 - val_accuracy: 0.6562

Epoch 00008: val_accuracy improved from 0.64062 to 0.65625, saving model to models\best_model_esc10_exp_0_12
Epoch 9/20
4/4 [==============================] - 6s 2s/step - loss: 0.9710 - accuracy: 0.6698 - val_loss: 1.0842 - val_accuracy: 0.6094

Epoch 00009: val_accuracy did not improve from 0.65625
Epoch 10/20
4/4 [==============================] - 6s 1s/step - loss: 0.9825 - accuracy: 0.6255 - val_loss: 0.8716 - val_accuracy: 0.6406

Epoch 00010: val_accuracy did not improve from 0.65625
Epoch 11/20
4/4 [==============================] - 6s 1s/step - loss: 1.0117 - accuracy: 0.6755 - val_loss: 0.8518 - val_accuracy: 0.6406

Epoch 00011: val_accuracy did not improve from 0.65625
Epoch 12/20
4/4 [==============================] - 6s 1s/step - loss: 0.8587 - accuracy: 0.6995 - val_loss: 0.8819 - val_accuracy: 0.6562

Epoch 00012: val_accuracy did not improve from 0.65625
Epoch 13/20
4/4 [==============================] - 6s 2s/step - loss: 0.7314 - accuracy: 0.7391 - val_loss: 0.8048 - val_accuracy: 0.7344

Epoch 00013: val_accuracy improved from 0.65625 to 0.73438, saving model to models\best_model_esc10_exp_0_12
Epoch 14/20
4/4 [==============================] - 6s 2s/step - loss: 0.7034 - accuracy: 0.7453 - val_loss: 0.7977 - val_accuracy: 0.7031

Epoch 00014: val_accuracy did not improve from 0.73438
Epoch 15/20
4/4 [==============================] - 7s 2s/step - loss: 0.5724 - accuracy: 0.7870 - val_loss: 0.7183 - val_accuracy: 0.7188

Epoch 00015: val_accuracy did not improve from 0.73438
Epoch 16/20
4/4 [==============================] - 6s 2s/step - loss: 0.5086 - accuracy: 0.8375 - val_loss: 0.6964 - val_accuracy: 0.7812

Epoch 00016: val_accuracy improved from 0.73438 to 0.78125, saving model to models\best_model_esc10_exp_0_12
Epoch 17/20
4/4 [==============================] - 7s 2s/step - loss: 0.5046 - accuracy: 0.8078 - val_loss: 0.7367 - val_accuracy: 0.7812

Epoch 00017: val_accuracy did not improve from 0.78125
Epoch 18/20
4/4 [==============================] - 6s 2s/step - loss: 0.4994 - accuracy: 0.8057 - val_loss: 0.7560 - val_accuracy: 0.7656

Epoch 00018: val_accuracy did not improve from 0.78125
Epoch 19/20
4/4 [==============================] - 6s 2s/step - loss: 0.4857 - accuracy: 0.7854 - val_loss: 0.6489 - val_accuracy: 0.7812

Epoch 00019: val_accuracy did not improve from 0.78125
Epoch 20/20
4/4 [==============================] - 7s 2s/step - loss: 0.3764 - accuracy: 0.8849 - val_loss: 0.7335 - val_accuracy: 0.8281

Epoch 00020: val_accuracy improved from 0.78125 to 0.82812, saving model to models\best_model_esc10_exp_0_12
Test accuracy:  0.7875000238418579
Epoch 1/20
4/4 [==============================] - 7s 2s/step - loss: 2.3033 - accuracy: 0.1307 - val_loss: 2.1638 - val_accuracy: 0.0938

Epoch 00001: val_accuracy improved from -inf to 0.09375, saving model to models\best_model_esc10_exp_0_13
Epoch 2/20
4/4 [==============================] - 6s 2s/step - loss: 2.1712 - accuracy: 0.1651 - val_loss: 2.0315 - val_accuracy: 0.3281

Epoch 00002: val_accuracy improved from 0.09375 to 0.32812, saving model to models\best_model_esc10_exp_0_13
Epoch 3/20
4/4 [==============================] - 6s 2s/step - loss: 2.0737 - accuracy: 0.2240 - val_loss: 1.8035 - val_accuracy: 0.3906

Epoch 00003: val_accuracy improved from 0.32812 to 0.39062, saving model to models\best_model_esc10_exp_0_13
Epoch 4/20
4/4 [==============================] - 6s 2s/step - loss: 1.8628 - accuracy: 0.3156 - val_loss: 1.5870 - val_accuracy: 0.4375

Epoch 00004: val_accuracy improved from 0.39062 to 0.43750, saving model to models\best_model_esc10_exp_0_13
Epoch 5/20
4/4 [==============================] - 6s 2s/step - loss: 1.6325 - accuracy: 0.3724 - val_loss: 1.3561 - val_accuracy: 0.6094

Epoch 00005: val_accuracy improved from 0.43750 to 0.60938, saving model to models\best_model_esc10_exp_0_13
Epoch 6/20
4/4 [==============================] - 6s 2s/step - loss: 1.5202 - accuracy: 0.4500 - val_loss: 1.2910 - val_accuracy: 0.6250

Epoch 00006: val_accuracy improved from 0.60938 to 0.62500, saving model to models\best_model_esc10_exp_0_13
Epoch 7/20
4/4 [==============================] - 7s 2s/step - loss: 1.2943 - accuracy: 0.5620 - val_loss: 1.1130 - val_accuracy: 0.5781

Epoch 00007: val_accuracy did not improve from 0.62500
Epoch 8/20
4/4 [==============================] - 6s 2s/step - loss: 1.0874 - accuracy: 0.6005 - val_loss: 1.1301 - val_accuracy: 0.6562

Epoch 00008: val_accuracy improved from 0.62500 to 0.65625, saving model to models\best_model_esc10_exp_0_13
Epoch 9/20
4/4 [==============================] - 6s 2s/step - loss: 1.0987 - accuracy: 0.5693 - val_loss: 0.9629 - val_accuracy: 0.6719

Epoch 00009: val_accuracy improved from 0.65625 to 0.67188, saving model to models\best_model_esc10_exp_0_13
Epoch 10/20
4/4 [==============================] - 6s 2s/step - loss: 0.9812 - accuracy: 0.6693 - val_loss: 0.9343 - val_accuracy: 0.6562

Epoch 00010: val_accuracy did not improve from 0.67188
Epoch 11/20
4/4 [==============================] - 6s 2s/step - loss: 0.8149 - accuracy: 0.7172 - val_loss: 1.0287 - val_accuracy: 0.6719

Epoch 00011: val_accuracy did not improve from 0.67188
Epoch 12/20
4/4 [==============================] - 6s 2s/step - loss: 0.8596 - accuracy: 0.6932 - val_loss: 0.8397 - val_accuracy: 0.6875

Epoch 00012: val_accuracy improved from 0.67188 to 0.68750, saving model to models\best_model_esc10_exp_0_13
Epoch 13/20
4/4 [==============================] - 6s 1s/step - loss: 0.7702 - accuracy: 0.7339 - val_loss: 0.8256 - val_accuracy: 0.6875

Epoch 00013: val_accuracy did not improve from 0.68750
Epoch 14/20
4/4 [==============================] - 6s 1s/step - loss: 0.7472 - accuracy: 0.7448 - val_loss: 0.7932 - val_accuracy: 0.7344

Epoch 00014: val_accuracy improved from 0.68750 to 0.73438, saving model to models\best_model_esc10_exp_0_13
Epoch 15/20
4/4 [==============================] - 5s 1s/step - loss: 0.6874 - accuracy: 0.7734 - val_loss: 0.7247 - val_accuracy: 0.7656

Epoch 00015: val_accuracy improved from 0.73438 to 0.76562, saving model to models\best_model_esc10_exp_0_13
Epoch 16/20
4/4 [==============================] - 6s 1s/step - loss: 0.6030 - accuracy: 0.7693 - val_loss: 0.8743 - val_accuracy: 0.7031

Epoch 00016: val_accuracy did not improve from 0.76562
Epoch 17/20
4/4 [==============================] - 6s 2s/step - loss: 0.6444 - accuracy: 0.7745 - val_loss: 0.7885 - val_accuracy: 0.7656

Epoch 00017: val_accuracy did not improve from 0.76562
Epoch 18/20
4/4 [==============================] - 6s 1s/step - loss: 0.5581 - accuracy: 0.7812 - val_loss: 0.6809 - val_accuracy: 0.7656

Epoch 00018: val_accuracy did not improve from 0.76562
Epoch 19/20
4/4 [==============================] - 6s 1s/step - loss: 0.5115 - accuracy: 0.8240 - val_loss: 0.6264 - val_accuracy: 0.7969

Epoch 00019: val_accuracy improved from 0.76562 to 0.79688, saving model to models\best_model_esc10_exp_0_13
Epoch 20/20
4/4 [==============================] - 6s 2s/step - loss: 0.4186 - accuracy: 0.8708 - val_loss: 0.6817 - val_accuracy: 0.7812

Epoch 00020: val_accuracy did not improve from 0.79688
Test accuracy:  0.800000011920929
Epoch 1/20
4/4 [==============================] - 6s 2s/step - loss: 2.3179 - accuracy: 0.0839 - val_loss: 2.1996 - val_accuracy: 0.2031

Epoch 00001: val_accuracy improved from -inf to 0.20312, saving model to models\best_model_esc10_exp_0_14
Epoch 2/20
4/4 [==============================] - 6s 1s/step - loss: 2.1599 - accuracy: 0.1880 - val_loss: 1.9349 - val_accuracy: 0.3438

Epoch 00002: val_accuracy improved from 0.20312 to 0.34375, saving model to models\best_model_esc10_exp_0_14
Epoch 3/20
4/4 [==============================] - 6s 1s/step - loss: 1.9340 - accuracy: 0.2240 - val_loss: 1.7405 - val_accuracy: 0.4219

Epoch 00003: val_accuracy improved from 0.34375 to 0.42188, saving model to models\best_model_esc10_exp_0_14
Epoch 4/20
4/4 [==============================] - 6s 1s/step - loss: 1.7602 - accuracy: 0.3557 - val_loss: 1.5677 - val_accuracy: 0.4062

Epoch 00004: val_accuracy did not improve from 0.42188
Epoch 5/20
4/4 [==============================] - 6s 2s/step - loss: 1.6229 - accuracy: 0.4323 - val_loss: 1.3310 - val_accuracy: 0.4844

Epoch 00005: val_accuracy improved from 0.42188 to 0.48438, saving model to models\best_model_esc10_exp_0_14
Epoch 6/20
4/4 [==============================] - 6s 2s/step - loss: 1.4596 - accuracy: 0.4568 - val_loss: 1.1781 - val_accuracy: 0.6094

Epoch 00006: val_accuracy improved from 0.48438 to 0.60938, saving model to models\best_model_esc10_exp_0_14
Epoch 7/20
4/4 [==============================] - 6s 1s/step - loss: 1.3097 - accuracy: 0.5125 - val_loss: 1.1272 - val_accuracy: 0.6094

Epoch 00007: val_accuracy did not improve from 0.60938
Epoch 8/20
4/4 [==============================] - 6s 2s/step - loss: 1.2123 - accuracy: 0.5625 - val_loss: 0.9645 - val_accuracy: 0.7344

Epoch 00008: val_accuracy improved from 0.60938 to 0.73438, saving model to models\best_model_esc10_exp_0_14
Epoch 9/20
4/4 [==============================] - 6s 2s/step - loss: 0.9771 - accuracy: 0.6318 - val_loss: 0.9605 - val_accuracy: 0.7188

Epoch 00009: val_accuracy did not improve from 0.73438
Epoch 10/20
4/4 [==============================] - 6s 2s/step - loss: 1.0441 - accuracy: 0.6344 - val_loss: 0.8840 - val_accuracy: 0.7188

Epoch 00010: val_accuracy did not improve from 0.73438
Epoch 11/20
4/4 [==============================] - 6s 1s/step - loss: 0.9789 - accuracy: 0.6458 - val_loss: 0.8264 - val_accuracy: 0.7500

Epoch 00011: val_accuracy improved from 0.73438 to 0.75000, saving model to models\best_model_esc10_exp_0_14
Epoch 12/20
4/4 [==============================] - 6s 1s/step - loss: 0.8046 - accuracy: 0.7005 - val_loss: 0.8421 - val_accuracy: 0.7500

Epoch 00012: val_accuracy did not improve from 0.75000
Epoch 13/20
4/4 [==============================] - 6s 1s/step - loss: 0.7470 - accuracy: 0.7635 - val_loss: 0.7763 - val_accuracy: 0.7500

Epoch 00013: val_accuracy did not improve from 0.75000
Epoch 14/20
4/4 [==============================] - 6s 1s/step - loss: 0.6856 - accuracy: 0.7547 - val_loss: 0.8602 - val_accuracy: 0.7500

Epoch 00014: val_accuracy did not improve from 0.75000
Epoch 15/20
4/4 [==============================] - 6s 1s/step - loss: 0.6612 - accuracy: 0.7839 - val_loss: 0.7463 - val_accuracy: 0.7500

Epoch 00015: val_accuracy did not improve from 0.75000
Epoch 16/20
4/4 [==============================] - 5s 1s/step - loss: 0.5562 - accuracy: 0.7911 - val_loss: 0.6722 - val_accuracy: 0.8125

Epoch 00016: val_accuracy improved from 0.75000 to 0.81250, saving model to models\best_model_esc10_exp_0_14
Epoch 17/20
4/4 [==============================] - 5s 1s/step - loss: 0.5763 - accuracy: 0.7995 - val_loss: 0.7569 - val_accuracy: 0.7812

Epoch 00017: val_accuracy did not improve from 0.81250
Epoch 18/20
4/4 [==============================] - 6s 1s/step - loss: 0.5459 - accuracy: 0.7984 - val_loss: 0.8542 - val_accuracy: 0.6875

Epoch 00018: val_accuracy did not improve from 0.81250
Epoch 19/20
4/4 [==============================] - 6s 2s/step - loss: 0.5312 - accuracy: 0.8167 - val_loss: 0.8776 - val_accuracy: 0.7344

Epoch 00019: val_accuracy did not improve from 0.81250
Epoch 20/20
4/4 [==============================] - 6s 1s/step - loss: 0.3967 - accuracy: 0.8672 - val_loss: 0.8335 - val_accuracy: 0.8125

Epoch 00020: val_accuracy did not improve from 0.81250
Test accuracy:  0.6875
Epoch 1/20
4/4 [==============================] - 6s 2s/step - loss: 2.2985 - accuracy: 0.1589 - val_loss: 2.0513 - val_accuracy: 0.2188

Epoch 00001: val_accuracy improved from -inf to 0.21875, saving model to models\best_model_esc10_exp_0_15
Epoch 2/20
4/4 [==============================] - 6s 1s/step - loss: 2.1205 - accuracy: 0.2359 - val_loss: 1.8466 - val_accuracy: 0.3438

Epoch 00002: val_accuracy improved from 0.21875 to 0.34375, saving model to models\best_model_esc10_exp_0_15
Epoch 3/20
4/4 [==============================] - 6s 1s/step - loss: 1.8761 - accuracy: 0.3125 - val_loss: 1.6087 - val_accuracy: 0.4531

Epoch 00003: val_accuracy improved from 0.34375 to 0.45312, saving model to models\best_model_esc10_exp_0_15
Epoch 4/20
4/4 [==============================] - 6s 2s/step - loss: 1.6905 - accuracy: 0.3719 - val_loss: 1.4819 - val_accuracy: 0.5625

Epoch 00004: val_accuracy improved from 0.45312 to 0.56250, saving model to models\best_model_esc10_exp_0_15
Epoch 5/20
4/4 [==============================] - 6s 1s/step - loss: 1.6031 - accuracy: 0.4365 - val_loss: 1.2572 - val_accuracy: 0.5625

Epoch 00005: val_accuracy did not improve from 0.56250
Epoch 6/20
4/4 [==============================] - 6s 1s/step - loss: 1.2572 - accuracy: 0.5760 - val_loss: 1.2224 - val_accuracy: 0.5625

Epoch 00006: val_accuracy did not improve from 0.56250
Epoch 7/20
4/4 [==============================] - 6s 2s/step - loss: 1.3546 - accuracy: 0.5146 - val_loss: 1.0550 - val_accuracy: 0.7031

Epoch 00007: val_accuracy improved from 0.56250 to 0.70312, saving model to models\best_model_esc10_exp_0_15
Epoch 8/20
4/4 [==============================] - 6s 1s/step - loss: 1.0702 - accuracy: 0.6172 - val_loss: 1.0571 - val_accuracy: 0.6562

Epoch 00008: val_accuracy did not improve from 0.70312
Epoch 9/20
4/4 [==============================] - 6s 1s/step - loss: 1.0247 - accuracy: 0.6760 - val_loss: 0.8985 - val_accuracy: 0.6875

Epoch 00009: val_accuracy did not improve from 0.70312
Epoch 10/20
4/4 [==============================] - 6s 1s/step - loss: 0.9000 - accuracy: 0.6802 - val_loss: 0.8430 - val_accuracy: 0.7500

Epoch 00010: val_accuracy improved from 0.70312 to 0.75000, saving model to models\best_model_esc10_exp_0_15
Epoch 11/20
4/4 [==============================] - 6s 1s/step - loss: 0.7967 - accuracy: 0.7208 - val_loss: 0.7604 - val_accuracy: 0.7344

Epoch 00011: val_accuracy did not improve from 0.75000
Epoch 12/20
4/4 [==============================] - 6s 2s/step - loss: 0.7970 - accuracy: 0.7167 - val_loss: 0.7081 - val_accuracy: 0.7500

Epoch 00012: val_accuracy did not improve from 0.75000
Epoch 13/20
4/4 [==============================] - 6s 1s/step - loss: 0.6294 - accuracy: 0.7906 - val_loss: 0.6939 - val_accuracy: 0.7344

Epoch 00013: val_accuracy did not improve from 0.75000
Epoch 14/20
4/4 [==============================] - 6s 1s/step - loss: 0.5912 - accuracy: 0.7870 - val_loss: 0.7371 - val_accuracy: 0.7500

Epoch 00014: val_accuracy did not improve from 0.75000
Epoch 15/20
4/4 [==============================] - 6s 2s/step - loss: 0.5421 - accuracy: 0.8203 - val_loss: 0.8034 - val_accuracy: 0.7344

Epoch 00015: val_accuracy did not improve from 0.75000
Epoch 16/20
4/4 [==============================] - 6s 1s/step - loss: 0.4589 - accuracy: 0.8943 - val_loss: 0.6480 - val_accuracy: 0.7969

Epoch 00016: val_accuracy improved from 0.75000 to 0.79688, saving model to models\best_model_esc10_exp_0_15
Epoch 17/20
4/4 [==============================] - 6s 1s/step - loss: 0.5039 - accuracy: 0.8089 - val_loss: 0.7735 - val_accuracy: 0.6875

Epoch 00017: val_accuracy did not improve from 0.79688
Epoch 18/20
4/4 [==============================] - 6s 1s/step - loss: 0.4981 - accuracy: 0.8036 - val_loss: 0.7448 - val_accuracy: 0.7812

Epoch 00018: val_accuracy did not improve from 0.79688
Epoch 19/20
4/4 [==============================] - 5s 1s/step - loss: 0.4617 - accuracy: 0.8396 - val_loss: 0.5723 - val_accuracy: 0.8281

Epoch 00019: val_accuracy improved from 0.79688 to 0.82812, saving model to models\best_model_esc10_exp_0_15
Epoch 20/20
4/4 [==============================] - 5s 1s/step - loss: 0.3173 - accuracy: 0.8943 - val_loss: 0.5720 - val_accuracy: 0.7656

Epoch 00020: val_accuracy did not improve from 0.82812
Test accuracy:  0.800000011920929
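The log above reports a final test accuracy for each completed run (the run ending at the top of this log plus exp_0_9 through exp_0_15). A minimal sketch to summarize the run-to-run spread of these repeated experiments; the accuracy values are copied from the printed output above, and the aggregation itself is an illustrative addition, not part of the original training code:

```python
import numpy as np

# Final test accuracies, copied from the "Test accuracy:" lines above
# (one value per completed training run).
test_accs = np.array([0.7000, 0.8250, 0.7625, 0.7750,
                      0.7875, 0.8000, 0.6875, 0.8000])

# Mean and (population) standard deviation summarize how much the
# result varies between otherwise identical runs.
print(f"runs: {len(test_accs)}")
print(f"mean test accuracy: {test_accs.mean():.4f}")
print(f"std of test accuracy: {test_accs.std():.4f}")
```

With only 4 steps per epoch and small validation/test splits, a spread of several percentage points between runs is expected, so the mean over repetitions is a more reliable figure than any single run's test accuracy.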
Epoch 1/20
4/4 [==============================] - 6s 1s/step - loss: 2.3028 - accuracy: 0.1500 - val_loss: 2.0971 - val_accuracy: 0.2188

Epoch 00001: val_accuracy improved from -inf to 0.21875, saving model to models\best_model_esc10_exp_0_16
Epoch 2/20
4/4 [==============================] - 6s 2s/step - loss: 2.0875 - accuracy: 0.2083 - val_loss: 1.8228 - val_accuracy: 0.3438

Epoch 00002: val_accuracy improved from 0.21875 to 0.34375, saving model to models\best_model_esc10_exp_0_16
Epoch 3/20
4/4 [==============================] - 6s 2s/step - loss: 1.9166 - accuracy: 0.2594 - val_loss: 1.5969 - val_accuracy: 0.4375

Epoch 00003: val_accuracy improved from 0.34375 to 0.43750, saving model to models\best_model_esc10_exp_0_16
Epoch 4/20
4/4 [==============================] - 6s 1s/step - loss: 1.7490 - accuracy: 0.3339 - val_loss: 1.4382 - val_accuracy: 0.5156

Epoch 00004: val_accuracy improved from 0.43750 to 0.51562, saving model to models\best_model_esc10_exp_0_16
Epoch 5/20
4/4 [==============================] - 6s 1s/step - loss: 1.5109 - accuracy: 0.4401 - val_loss: 1.2795 - val_accuracy: 0.5625

Epoch 00005: val_accuracy improved from 0.51562 to 0.56250, saving model to models\best_model_esc10_exp_0_16
Epoch 6/20
4/4 [==============================] - 6s 2s/step - loss: 1.4085 - accuracy: 0.4833 - val_loss: 1.2415 - val_accuracy: 0.6875

Epoch 00006: val_accuracy improved from 0.56250 to 0.68750, saving model to models\best_model_esc10_exp_0_16
Epoch 7/20
4/4 [==============================] - 6s 2s/step - loss: 1.4020 - accuracy: 0.4703 - val_loss: 1.2475 - val_accuracy: 0.6094

Epoch 00007: val_accuracy did not improve from 0.68750
Epoch 8/20
4/4 [==============================] - 6s 2s/step - loss: 1.1773 - accuracy: 0.5823 - val_loss: 1.0777 - val_accuracy: 0.7031

Epoch 00008: val_accuracy improved from 0.68750 to 0.70312, saving model to models\best_model_esc10_exp_0_16
Epoch 9/20
4/4 [==============================] - 6s 2s/step - loss: 1.0372 - accuracy: 0.6693 - val_loss: 0.9902 - val_accuracy: 0.6562

Epoch 00009: val_accuracy did not improve from 0.70312
Epoch 10/20
4/4 [==============================] - 6s 2s/step - loss: 0.9266 - accuracy: 0.6682 - val_loss: 0.9342 - val_accuracy: 0.7500

Epoch 00010: val_accuracy improved from 0.70312 to 0.75000, saving model to models\best_model_esc10_exp_0_16
Epoch 11/20
4/4 [==============================] - 6s 2s/step - loss: 0.9025 - accuracy: 0.7115 - val_loss: 0.7600 - val_accuracy: 0.7656

Epoch 00011: val_accuracy improved from 0.75000 to 0.76562, saving model to models\best_model_esc10_exp_0_16
Epoch 12/20
4/4 [==============================] - 6s 2s/step - loss: 0.8145 - accuracy: 0.6875 - val_loss: 0.8784 - val_accuracy: 0.6875

Epoch 00012: val_accuracy did not improve from 0.76562
Epoch 13/20
4/4 [==============================] - 6s 2s/step - loss: 0.7346 - accuracy: 0.7635 - val_loss: 0.7229 - val_accuracy: 0.7812

Epoch 00013: val_accuracy improved from 0.76562 to 0.78125, saving model to models\best_model_esc10_exp_0_16
Epoch 14/20
4/4 [==============================] - 6s 2s/step - loss: 0.6360 - accuracy: 0.7797 - val_loss: 0.6976 - val_accuracy: 0.7812

Epoch 00014: val_accuracy did not improve from 0.78125
Epoch 15/20
4/4 [==============================] - 6s 1s/step - loss: 0.6222 - accuracy: 0.7802 - val_loss: 0.7324 - val_accuracy: 0.7812

Epoch 00015: val_accuracy did not improve from 0.78125
Epoch 16/20
4/4 [==============================] - 6s 1s/step - loss: 0.4957 - accuracy: 0.8016 - val_loss: 0.6348 - val_accuracy: 0.7969

Epoch 00016: val_accuracy improved from 0.78125 to 0.79688, saving model to models\best_model_esc10_exp_0_16
Epoch 17/20
4/4 [==============================] - 6s 1s/step - loss: 0.5234 - accuracy: 0.7948 - val_loss: 0.7187 - val_accuracy: 0.7656

Epoch 00017: val_accuracy did not improve from 0.79688
Epoch 18/20
4/4 [==============================] - 6s 2s/step - loss: 0.5393 - accuracy: 0.7995 - val_loss: 0.6982 - val_accuracy: 0.7812

Epoch 00018: val_accuracy did not improve from 0.79688
Epoch 19/20
4/4 [==============================] - 6s 2s/step - loss: 0.5044 - accuracy: 0.7875 - val_loss: 0.5976 - val_accuracy: 0.8125

Epoch 00019: val_accuracy improved from 0.79688 to 0.81250, saving model to models\best_model_esc10_exp_0_16
Epoch 20/20
4/4 [==============================] - 6s 1s/step - loss: 0.5155 - accuracy: 0.8297 - val_loss: 0.7645 - val_accuracy: 0.8125

Epoch 00020: val_accuracy did not improve from 0.81250
Test accuracy:  0.800000011920929
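The "val_accuracy improved from X to Y, saving model to …" lines above come from a checkpoint callback that only persists the model when the monitored metric beats its previous best. A minimal pure-Python sketch of that bookkeeping (assuming the notebook uses Keras `ModelCheckpoint` with `monitor='val_accuracy'` and `save_best_only=True`, which the log wording suggests):

```python
def track_best(val_accuracies):
    """Reproduce the style of the checkpoint log messages for a list of
    per-epoch validation accuracies. A real callback would also save the
    model weights whenever the metric improves."""
    best = float('-inf')  # matches the "improved from -inf" first-epoch message
    messages = []
    for epoch, acc in enumerate(val_accuracies, start=1):
        if acc > best:
            messages.append(
                f"Epoch {epoch:05d}: val_accuracy improved from {best:.5f} to {acc:.5f}")
            best = acc
        else:
            messages.append(
                f"Epoch {epoch:05d}: val_accuracy did not improve from {best:.5f}")
    return messages

for line in track_best([0.21875, 0.34375, 0.34375]):
    print(line)
```

Because only improvements trigger a save, the checkpoint on disk always holds the weights from the best validation epoch, not from the final one.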
Epoch 1/20
4/4 [==============================] - 8s 2s/step - loss: 2.2877 - accuracy: 0.1365 - val_loss: 2.0336 - val_accuracy: 0.2812

Epoch 00001: val_accuracy improved from -inf to 0.28125, saving model to models\best_model_esc10_exp_0_17
Epoch 2/20
4/4 [==============================] - 8s 2s/step - loss: 2.0002 - accuracy: 0.2542 - val_loss: 1.7227 - val_accuracy: 0.3594

Epoch 00002: val_accuracy improved from 0.28125 to 0.35938, saving model to models\best_model_esc10_exp_0_17
Epoch 3/20
4/4 [==============================] - 7s 2s/step - loss: 1.8037 - accuracy: 0.3099 - val_loss: 1.5774 - val_accuracy: 0.4844

Epoch 00003: val_accuracy improved from 0.35938 to 0.48438, saving model to models\best_model_esc10_exp_0_17
Epoch 4/20
4/4 [==============================] - 7s 2s/step - loss: 1.6787 - accuracy: 0.3807 - val_loss: 1.3757 - val_accuracy: 0.5938

Epoch 00004: val_accuracy improved from 0.48438 to 0.59375, saving model to models\best_model_esc10_exp_0_17
Epoch 5/20
4/4 [==============================] - 8s 2s/step - loss: 1.4986 - accuracy: 0.4948 - val_loss: 1.3742 - val_accuracy: 0.5312

Epoch 00005: val_accuracy did not improve from 0.59375
Epoch 6/20
4/4 [==============================] - 8s 2s/step - loss: 1.3828 - accuracy: 0.5094 - val_loss: 1.2296 - val_accuracy: 0.5312

Epoch 00006: val_accuracy did not improve from 0.59375
Epoch 7/20
4/4 [==============================] - 8s 2s/step - loss: 1.1235 - accuracy: 0.6130 - val_loss: 1.0132 - val_accuracy: 0.6875

Epoch 00007: val_accuracy improved from 0.59375 to 0.68750, saving model to models\best_model_esc10_exp_0_17
Epoch 8/20
4/4 [==============================] - 8s 2s/step - loss: 1.0926 - accuracy: 0.5995 - val_loss: 0.9732 - val_accuracy: 0.6562

Epoch 00008: val_accuracy did not improve from 0.68750
Epoch 9/20
4/4 [==============================] - 8s 2s/step - loss: 0.9685 - accuracy: 0.6807 - val_loss: 0.9711 - val_accuracy: 0.7188

Epoch 00009: val_accuracy improved from 0.68750 to 0.71875, saving model to models\best_model_esc10_exp_0_17
Epoch 10/20
4/4 [==============================] - 8s 2s/step - loss: 0.8447 - accuracy: 0.7453 - val_loss: 0.8657 - val_accuracy: 0.7344

Epoch 00010: val_accuracy improved from 0.71875 to 0.73438, saving model to models\best_model_esc10_exp_0_17
Epoch 11/20
4/4 [==============================] - 8s 2s/step - loss: 0.7439 - accuracy: 0.7615 - val_loss: 0.8985 - val_accuracy: 0.6406

Epoch 00011: val_accuracy did not improve from 0.73438
Epoch 12/20
4/4 [==============================] - 8s 2s/step - loss: 0.7309 - accuracy: 0.7599 - val_loss: 0.7785 - val_accuracy: 0.7344

Epoch 00012: val_accuracy did not improve from 0.73438
Epoch 13/20
4/4 [==============================] - 8s 2s/step - loss: 0.6148 - accuracy: 0.7922 - val_loss: 0.7988 - val_accuracy: 0.7500

Epoch 00013: val_accuracy improved from 0.73438 to 0.75000, saving model to models\best_model_esc10_exp_0_17
Epoch 14/20
4/4 [==============================] - 8s 2s/step - loss: 0.5707 - accuracy: 0.7979 - val_loss: 0.7409 - val_accuracy: 0.7188

Epoch 00014: val_accuracy did not improve from 0.75000
Epoch 15/20
4/4 [==============================] - 7s 2s/step - loss: 0.4742 - accuracy: 0.8141 - val_loss: 0.6966 - val_accuracy: 0.7656

Epoch 00015: val_accuracy improved from 0.75000 to 0.76562, saving model to models\best_model_esc10_exp_0_17
Epoch 16/20
4/4 [==============================] - 7s 2s/step - loss: 0.4363 - accuracy: 0.8479 - val_loss: 0.7807 - val_accuracy: 0.7812

Epoch 00016: val_accuracy improved from 0.76562 to 0.78125, saving model to models\best_model_esc10_exp_0_17
Epoch 17/20
4/4 [==============================] - 7s 2s/step - loss: 0.4130 - accuracy: 0.8651 - val_loss: 0.7261 - val_accuracy: 0.7500

Epoch 00017: val_accuracy did not improve from 0.78125
Epoch 18/20
4/4 [==============================] - 7s 2s/step - loss: 0.4191 - accuracy: 0.8672 - val_loss: 0.6610 - val_accuracy: 0.7812

Epoch 00018: val_accuracy did not improve from 0.78125
Epoch 19/20
4/4 [==============================] - 8s 2s/step - loss: 0.3367 - accuracy: 0.8875 - val_loss: 0.7278 - val_accuracy: 0.7969

Epoch 00019: val_accuracy improved from 0.78125 to 0.79688, saving model to models\best_model_esc10_exp_0_17
Epoch 20/20
4/4 [==============================] - 8s 2s/step - loss: 0.3261 - accuracy: 0.8729 - val_loss: 0.7254 - val_accuracy: 0.7812

Epoch 00020: val_accuracy did not improve from 0.79688
Test accuracy:  0.737500011920929
Epoch 1/20
4/4 [==============================] - 9s 2s/step - loss: 2.3447 - accuracy: 0.0781 - val_loss: 2.2069 - val_accuracy: 0.2500

Epoch 00001: val_accuracy improved from -inf to 0.25000, saving model to models\best_model_esc10_exp_0_18
Epoch 2/20
4/4 [==============================] - 9s 2s/step - loss: 2.1891 - accuracy: 0.2021 - val_loss: 1.9713 - val_accuracy: 0.3281

Epoch 00002: val_accuracy improved from 0.25000 to 0.32812, saving model to models\best_model_esc10_exp_0_18
Epoch 3/20
4/4 [==============================] - 9s 2s/step - loss: 2.0430 - accuracy: 0.2240 - val_loss: 1.7653 - val_accuracy: 0.3438

Epoch 00003: val_accuracy improved from 0.32812 to 0.34375, saving model to models\best_model_esc10_exp_0_18
Epoch 4/20
4/4 [==============================] - 9s 2s/step - loss: 1.8307 - accuracy: 0.2776 - val_loss: 1.6765 - val_accuracy: 0.4062

Epoch 00004: val_accuracy improved from 0.34375 to 0.40625, saving model to models\best_model_esc10_exp_0_18
Epoch 5/20
4/4 [==============================] - 9s 2s/step - loss: 1.7728 - accuracy: 0.3536 - val_loss: 1.4838 - val_accuracy: 0.5312

Epoch 00005: val_accuracy improved from 0.40625 to 0.53125, saving model to models\best_model_esc10_exp_0_18
Epoch 6/20
4/4 [==============================] - 9s 2s/step - loss: 1.6384 - accuracy: 0.4214 - val_loss: 1.4681 - val_accuracy: 0.5156

Epoch 00006: val_accuracy did not improve from 0.53125
Epoch 7/20
4/4 [==============================] - 8s 2s/step - loss: 1.4533 - accuracy: 0.4432 - val_loss: 1.3599 - val_accuracy: 0.4688

Epoch 00007: val_accuracy did not improve from 0.53125
Epoch 8/20
4/4 [==============================] - 8s 2s/step - loss: 1.4120 - accuracy: 0.4719 - val_loss: 1.1841 - val_accuracy: 0.6250

Epoch 00008: val_accuracy improved from 0.53125 to 0.62500, saving model to models\best_model_esc10_exp_0_18
Epoch 9/20
4/4 [==============================] - 8s 2s/step - loss: 1.2375 - accuracy: 0.5505 - val_loss: 1.1100 - val_accuracy: 0.6562

Epoch 00009: val_accuracy improved from 0.62500 to 0.65625, saving model to models\best_model_esc10_exp_0_18
Epoch 10/20
4/4 [==============================] - 8s 2s/step - loss: 1.1774 - accuracy: 0.6036 - val_loss: 1.0971 - val_accuracy: 0.5938

Epoch 00010: val_accuracy did not improve from 0.65625
Epoch 11/20
4/4 [==============================] - 8s 2s/step - loss: 0.9907 - accuracy: 0.6620 - val_loss: 1.0171 - val_accuracy: 0.6875

Epoch 00011: val_accuracy improved from 0.65625 to 0.68750, saving model to models\best_model_esc10_exp_0_18
Epoch 12/20
4/4 [==============================] - 8s 2s/step - loss: 1.0073 - accuracy: 0.6521 - val_loss: 0.8975 - val_accuracy: 0.6719

Epoch 00012: val_accuracy did not improve from 0.68750
Epoch 13/20
4/4 [==============================] - 9s 2s/step - loss: 0.8150 - accuracy: 0.7339 - val_loss: 0.8815 - val_accuracy: 0.7188

Epoch 00013: val_accuracy improved from 0.68750 to 0.71875, saving model to models\best_model_esc10_exp_0_18
Epoch 14/20
4/4 [==============================] - 9s 2s/step - loss: 0.7540 - accuracy: 0.7323 - val_loss: 0.9051 - val_accuracy: 0.6719

Epoch 00014: val_accuracy did not improve from 0.71875
Epoch 15/20
4/4 [==============================] - 9s 2s/step - loss: 0.6820 - accuracy: 0.7500 - val_loss: 0.8158 - val_accuracy: 0.7188

Epoch 00015: val_accuracy did not improve from 0.71875
Epoch 16/20
4/4 [==============================] - 10s 3s/step - loss: 0.6320 - accuracy: 0.7917 - val_loss: 0.6947 - val_accuracy: 0.7656

Epoch 00016: val_accuracy improved from 0.71875 to 0.76562, saving model to models\best_model_esc10_exp_0_18
Epoch 17/20
4/4 [==============================] - 10s 2s/step - loss: 0.5313 - accuracy: 0.8016 - val_loss: 0.8137 - val_accuracy: 0.7656

Epoch 00017: val_accuracy did not improve from 0.76562
Epoch 18/20
4/4 [==============================] - 9s 2s/step - loss: 0.5193 - accuracy: 0.8234 - val_loss: 0.6439 - val_accuracy: 0.8125

Epoch 00018: val_accuracy improved from 0.76562 to 0.81250, saving model to models\best_model_esc10_exp_0_18
Epoch 19/20
4/4 [==============================] - 8s 2s/step - loss: 0.5353 - accuracy: 0.8177 - val_loss: 0.7259 - val_accuracy: 0.7812

Epoch 00019: val_accuracy did not improve from 0.81250
Epoch 20/20
4/4 [==============================] - 9s 2s/step - loss: 0.5804 - accuracy: 0.7995 - val_loss: 0.6666 - val_accuracy: 0.8281

Epoch 00020: val_accuracy improved from 0.81250 to 0.82812, saving model to models\best_model_esc10_exp_0_18
Test accuracy:  0.8125
Epoch 1/20
4/4 [==============================] - 9s 2s/step - loss: 2.2960 - accuracy: 0.1661 - val_loss: 1.9787 - val_accuracy: 0.3125

Epoch 00001: val_accuracy improved from -inf to 0.31250, saving model to models\best_model_esc10_exp_0_19
Epoch 2/20
4/4 [==============================] - 9s 2s/step - loss: 2.0019 - accuracy: 0.2109 - val_loss: 1.7578 - val_accuracy: 0.3750

Epoch 00002: val_accuracy improved from 0.31250 to 0.37500, saving model to models\best_model_esc10_exp_0_19
Epoch 3/20
4/4 [==============================] - 8s 2s/step - loss: 1.8295 - accuracy: 0.2911 - val_loss: 1.6726 - val_accuracy: 0.4219

Epoch 00003: val_accuracy improved from 0.37500 to 0.42188, saving model to models\best_model_esc10_exp_0_19
Epoch 4/20
4/4 [==============================] - 8s 2s/step - loss: 1.7463 - accuracy: 0.3354 - val_loss: 1.4076 - val_accuracy: 0.5156

Epoch 00004: val_accuracy improved from 0.42188 to 0.51562, saving model to models\best_model_esc10_exp_0_19
Epoch 5/20
4/4 [==============================] - 8s 2s/step - loss: 1.6008 - accuracy: 0.4313 - val_loss: 1.3416 - val_accuracy: 0.6094

Epoch 00005: val_accuracy improved from 0.51562 to 0.60938, saving model to models\best_model_esc10_exp_0_19
Epoch 6/20
4/4 [==============================] - 8s 2s/step - loss: 1.4225 - accuracy: 0.4365 - val_loss: 1.1199 - val_accuracy: 0.6719

Epoch 00006: val_accuracy improved from 0.60938 to 0.67188, saving model to models\best_model_esc10_exp_0_19
Epoch 7/20
4/4 [==============================] - 8s 2s/step - loss: 1.1810 - accuracy: 0.5839 - val_loss: 0.9405 - val_accuracy: 0.7188

Epoch 00007: val_accuracy improved from 0.67188 to 0.71875, saving model to models\best_model_esc10_exp_0_19
Epoch 8/20
4/4 [==============================] - 8s 2s/step - loss: 1.0824 - accuracy: 0.5995 - val_loss: 0.8690 - val_accuracy: 0.6719

Epoch 00008: val_accuracy did not improve from 0.71875
Epoch 9/20
4/4 [==============================] - 8s 2s/step - loss: 0.9842 - accuracy: 0.6302 - val_loss: 0.8490 - val_accuracy: 0.7031

Epoch 00009: val_accuracy did not improve from 0.71875
Epoch 10/20
4/4 [==============================] - 9s 2s/step - loss: 0.8898 - accuracy: 0.6969 - val_loss: 0.6814 - val_accuracy: 0.7812

Epoch 00010: val_accuracy improved from 0.71875 to 0.78125, saving model to models\best_model_esc10_exp_0_19
Epoch 11/20
4/4 [==============================] - 9s 2s/step - loss: 0.8497 - accuracy: 0.7052 - val_loss: 0.7422 - val_accuracy: 0.7812

Epoch 00011: val_accuracy did not improve from 0.78125
Epoch 12/20
4/4 [==============================] - 9s 2s/step - loss: 0.8098 - accuracy: 0.7120 - val_loss: 0.6726 - val_accuracy: 0.7812

Epoch 00012: val_accuracy did not improve from 0.78125
Epoch 13/20
4/4 [==============================] - 9s 2s/step - loss: 0.6213 - accuracy: 0.7646 - val_loss: 0.5982 - val_accuracy: 0.7812

Epoch 00013: val_accuracy did not improve from 0.78125
Epoch 14/20
4/4 [==============================] - 9s 2s/step - loss: 0.5465 - accuracy: 0.8208 - val_loss: 0.6689 - val_accuracy: 0.7656

Epoch 00014: val_accuracy did not improve from 0.78125
Epoch 15/20
4/4 [==============================] - 9s 2s/step - loss: 0.5333 - accuracy: 0.8151 - val_loss: 0.7355 - val_accuracy: 0.8125

Epoch 00015: val_accuracy improved from 0.78125 to 0.81250, saving model to models\best_model_esc10_exp_0_19
Epoch 16/20
4/4 [==============================] - 9s 2s/step - loss: 0.5108 - accuracy: 0.8203 - val_loss: 0.6529 - val_accuracy: 0.7812

Epoch 00016: val_accuracy did not improve from 0.81250
Epoch 17/20
4/4 [==============================] - 9s 2s/step - loss: 0.4313 - accuracy: 0.8490 - val_loss: 0.6358 - val_accuracy: 0.8125

Epoch 00017: val_accuracy did not improve from 0.81250
Epoch 18/20
4/4 [==============================] - 9s 2s/step - loss: 0.4102 - accuracy: 0.8370 - val_loss: 0.5222 - val_accuracy: 0.7969

Epoch 00018: val_accuracy did not improve from 0.81250
Epoch 19/20
4/4 [==============================] - 9s 2s/step - loss: 0.3373 - accuracy: 0.8693 - val_loss: 0.6416 - val_accuracy: 0.8125

Epoch 00019: val_accuracy did not improve from 0.81250
Epoch 20/20
4/4 [==============================] - 9s 2s/step - loss: 0.3113 - accuracy: 0.8812 - val_loss: 0.6001 - val_accuracy: 0.8281

Epoch 00020: val_accuracy improved from 0.81250 to 0.82812, saving model to models\best_model_esc10_exp_0_19
Test accuracy:  0.7875000238418579
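Since each configuration is trained several times, the individual test accuracies are best summarized by their mean and spread. As an illustration (a sketch, not necessarily the notebook's exact aggregation), the four repeated runs visible in this excerpt (`best_model_esc10_exp_0_16` through `_19`) can be combined like this:

```python
from statistics import mean, pstdev

# Test accuracies of the four exp_0 runs shown above
test_accuracies = [0.8000, 0.7375, 0.8125, 0.7875]

print(f"mean = {mean(test_accuracies):.4f}")   # mean = 0.7844
print(f"std  = {pstdev(test_accuracies):.4f}")
```

Reporting the mean over repeats gives a more stable estimate than any single run, since weight initialization and data shuffling make each training run stochastic.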
{'n_augmentation_per_train': 1, 'p_per_augmentation': 0.5}
100%|██████████| 256/256 [01:04<00:00,  3.99it/s]
Shape after augmentation:  (512, 128, 431, 1) (512, 10) (64, 128, 431, 1) (80, 128, 431, 1)
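The shape printout shows the training set doubling from 256 to 512 samples: with `n_augmentation_per_train = 1`, every training clip gets one augmented copy appended, and `p_per_augmentation = 0.5` presumably controls the probability that each transform in the augmentation chain (the notebook imports audiomentations' `Compose`) actually fires. A minimal stdlib-only sketch of that bookkeeping, with Gaussian noise standing in for the real transform chain:

```python
import random

def augment_train_set(samples, n_augmentation_per_train=1, p_per_augmentation=0.5):
    """Append n augmented copies of every clip to the training set.
    Each copy is transformed with probability p (here: additive Gaussian
    noise as a stand-in for the full audiomentations chain)."""
    augmented = list(samples)  # keep the originals
    for clip in samples:
        for _ in range(n_augmentation_per_train):
            if random.random() < p_per_augmentation:
                new_clip = [x + random.gauss(0.0, 0.01) for x in clip]
            else:
                new_clip = list(clip)  # transform skipped, copy unchanged
            augmented.append(new_clip)
    return augmented

train = [[0.0] * 8 for _ in range(256)]
print(len(augment_train_set(train)))  # 256 originals + 256 copies = 512
```

Note that only the training split is augmented; the validation and test sets keep their original 64 and 80 samples, so the evaluation remains on untouched audio.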
{'n_filters_l1': 64, 'n_filters_l2': 32, 'n_filters_l3': 32, 'n_dense_layer': 150, 'batch_size': 64, 'epochs': 20}
Epoch 1/20
8/8 [==============================] - 14s 2s/step - loss: 2.3020 - accuracy: 0.1274 - val_loss: 1.9278 - val_accuracy: 0.2188

Epoch 00001: val_accuracy improved from -inf to 0.21875, saving model to models\best_model_esc10_exp_1_0
Epoch 2/20
8/8 [==============================] - 15s 2s/step - loss: 2.0079 - accuracy: 0.2181 - val_loss: 1.7209 - val_accuracy: 0.3906

Epoch 00002: val_accuracy improved from 0.21875 to 0.39062, saving model to models\best_model_esc10_exp_1_0
Epoch 3/20
8/8 [==============================] - 17s 2s/step - loss: 1.8098 - accuracy: 0.3314 - val_loss: 1.5862 - val_accuracy: 0.4375

Epoch 00003: val_accuracy improved from 0.39062 to 0.43750, saving model to models\best_model_esc10_exp_1_0
Epoch 4/20
8/8 [==============================] - 17s 2s/step - loss: 1.6334 - accuracy: 0.4096 - val_loss: 1.1215 - val_accuracy: 0.6719

Epoch 00004: val_accuracy improved from 0.43750 to 0.67188, saving model to models\best_model_esc10_exp_1_0
Epoch 5/20
8/8 [==============================] - 17s 2s/step - loss: 1.3616 - accuracy: 0.4995 - val_loss: 1.0690 - val_accuracy: 0.6719

Epoch 00005: val_accuracy did not improve from 0.67188
Epoch 6/20
8/8 [==============================] - 17s 2s/step - loss: 1.2029 - accuracy: 0.5804 - val_loss: 0.9828 - val_accuracy: 0.6875

Epoch 00006: val_accuracy improved from 0.67188 to 0.68750, saving model to models\best_model_esc10_exp_1_0
Epoch 7/20
8/8 [==============================] - 17s 2s/step - loss: 1.0819 - accuracy: 0.5682 - val_loss: 0.8638 - val_accuracy: 0.7031

Epoch 00007: val_accuracy improved from 0.68750 to 0.70312, saving model to models\best_model_esc10_exp_1_0
Epoch 8/20
8/8 [==============================] - 17s 2s/step - loss: 0.8493 - accuracy: 0.6692 - val_loss: 0.8397 - val_accuracy: 0.7500

Epoch 00008: val_accuracy improved from 0.70312 to 0.75000, saving model to models\best_model_esc10_exp_1_0
Epoch 9/20
8/8 [==============================] - 17s 2s/step - loss: 0.8462 - accuracy: 0.6827 - val_loss: 0.6970 - val_accuracy: 0.7500

Epoch 00009: val_accuracy did not improve from 0.75000
Epoch 10/20
8/8 [==============================] - 17s 2s/step - loss: 0.7646 - accuracy: 0.7197 - val_loss: 0.7821 - val_accuracy: 0.7031

Epoch 00010: val_accuracy did not improve from 0.75000
Epoch 11/20
8/8 [==============================] - 16s 2s/step - loss: 0.6826 - accuracy: 0.7706 - val_loss: 0.6062 - val_accuracy: 0.7812

Epoch 00011: val_accuracy improved from 0.75000 to 0.78125, saving model to models\best_model_esc10_exp_1_0
Epoch 12/20
8/8 [==============================] - 15s 2s/step - loss: 0.6256 - accuracy: 0.7646 - val_loss: 0.5802 - val_accuracy: 0.7812

Epoch 00012: val_accuracy did not improve from 0.78125
Epoch 13/20
8/8 [==============================] - 15s 2s/step - loss: 0.6011 - accuracy: 0.7985 - val_loss: 0.6635 - val_accuracy: 0.7812

Epoch 00013: val_accuracy did not improve from 0.78125
Epoch 14/20
8/8 [==============================] - 17s 2s/step - loss: 0.5239 - accuracy: 0.8024 - val_loss: 0.5208 - val_accuracy: 0.8281

Epoch 00014: val_accuracy improved from 0.78125 to 0.82812, saving model to models\best_model_esc10_exp_1_0
Epoch 15/20
8/8 [==============================] - 17s 2s/step - loss: 0.4427 - accuracy: 0.8342 - val_loss: 0.6979 - val_accuracy: 0.7656

Epoch 00015: val_accuracy did not improve from 0.82812
Epoch 16/20
8/8 [==============================] - 17s 2s/step - loss: 0.4684 - accuracy: 0.8437 - val_loss: 0.7800 - val_accuracy: 0.7812

Epoch 00016: val_accuracy did not improve from 0.82812
Epoch 17/20
8/8 [==============================] - 17s 2s/step - loss: 0.4989 - accuracy: 0.8224 - val_loss: 0.5965 - val_accuracy: 0.8125

Epoch 00017: val_accuracy did not improve from 0.82812
Epoch 18/20
8/8 [==============================] - 17s 2s/step - loss: 0.3065 - accuracy: 0.9062 - val_loss: 0.6573 - val_accuracy: 0.8281

Epoch 00018: val_accuracy did not improve from 0.82812
Epoch 19/20
8/8 [==============================] - 17s 2s/step - loss: 0.2892 - accuracy: 0.8930 - val_loss: 0.5864 - val_accuracy: 0.8125

Epoch 00019: val_accuracy did not improve from 0.82812
Epoch 20/20
8/8 [==============================] - 16s 2s/step - loss: 0.2704 - accuracy: 0.9043 - val_loss: 0.7028 - val_accuracy: 0.8281

Epoch 00020: val_accuracy did not improve from 0.82812
Test accuracy:  0.800000011920929
Epoch 1/20
8/8 [==============================] - 16s 2s/step - loss: 2.3338 - accuracy: 0.1392 - val_loss: 1.7825 - val_accuracy: 0.3594

Epoch 00001: val_accuracy improved from -inf to 0.35938, saving model to models\best_model_esc10_exp_1_1
Epoch 2/20
8/8 [==============================] - 15s 2s/step - loss: 1.9967 - accuracy: 0.2466 - val_loss: 1.6090 - val_accuracy: 0.5312

Epoch 00002: val_accuracy improved from 0.35938 to 0.53125, saving model to models\best_model_esc10_exp_1_1
Epoch 3/20
8/8 [==============================] - 16s 2s/step - loss: 1.7387 - accuracy: 0.3561 - val_loss: 1.3355 - val_accuracy: 0.4844

Epoch 00003: val_accuracy did not improve from 0.53125
Epoch 4/20
8/8 [==============================] - 17s 2s/step - loss: 1.6052 - accuracy: 0.3600 - val_loss: 1.2177 - val_accuracy: 0.6094

Epoch 00004: val_accuracy improved from 0.53125 to 0.60938, saving model to models\best_model_esc10_exp_1_1
Epoch 5/20
8/8 [==============================] - 17s 2s/step - loss: 1.4416 - accuracy: 0.4817 - val_loss: 0.9319 - val_accuracy: 0.7031

Epoch 00005: val_accuracy improved from 0.60938 to 0.70312, saving model to models\best_model_esc10_exp_1_1
Epoch 6/20
8/8 [==============================] - 18s 2s/step - loss: 1.2270 - accuracy: 0.5260 - val_loss: 0.8664 - val_accuracy: 0.6719

Epoch 00006: val_accuracy did not improve from 0.70312
Epoch 7/20
8/8 [==============================] - 17s 2s/step - loss: 1.1071 - accuracy: 0.5985 - val_loss: 0.7176 - val_accuracy: 0.7656

Epoch 00007: val_accuracy improved from 0.70312 to 0.76562, saving model to models\best_model_esc10_exp_1_1
Epoch 8/20
8/8 [==============================] - 17s 2s/step - loss: 1.0003 - accuracy: 0.6470 - val_loss: 0.7681 - val_accuracy: 0.7812

Epoch 00008: val_accuracy improved from 0.76562 to 0.78125, saving model to models\best_model_esc10_exp_1_1
Epoch 9/20
8/8 [==============================] - 17s 2s/step - loss: 0.9083 - accuracy: 0.6547 - val_loss: 0.6979 - val_accuracy: 0.7500

Epoch 00009: val_accuracy did not improve from 0.78125
Epoch 10/20
8/8 [==============================] - 16s 2s/step - loss: 0.6984 - accuracy: 0.7678 - val_loss: 0.6562 - val_accuracy: 0.7500

Epoch 00010: val_accuracy did not improve from 0.78125
Epoch 11/20
8/8 [==============================] - 16s 2s/step - loss: 0.7329 - accuracy: 0.7173 - val_loss: 0.6548 - val_accuracy: 0.7656

Epoch 00011: val_accuracy did not improve from 0.78125
Epoch 12/20
8/8 [==============================] - 17s 2s/step - loss: 0.6399 - accuracy: 0.7600 - val_loss: 0.7968 - val_accuracy: 0.7656

Epoch 00012: val_accuracy did not improve from 0.78125
Epoch 13/20
8/8 [==============================] - 17s 2s/step - loss: 0.5664 - accuracy: 0.7947 - val_loss: 0.5175 - val_accuracy: 0.8438

Epoch 00013: val_accuracy improved from 0.78125 to 0.84375, saving model to models\best_model_esc10_exp_1_1
Epoch 14/20
8/8 [==============================] - 18s 2s/step - loss: 0.5946 - accuracy: 0.7782 - val_loss: 0.6436 - val_accuracy: 0.7969

Epoch 00014: val_accuracy did not improve from 0.84375
Epoch 15/20
8/8 [==============================] - 17s 2s/step - loss: 0.5282 - accuracy: 0.8335 - val_loss: 0.4794 - val_accuracy: 0.8281

Epoch 00015: val_accuracy did not improve from 0.84375
Epoch 16/20
8/8 [==============================] - 17s 2s/step - loss: 0.4756 - accuracy: 0.8419 - val_loss: 0.5397 - val_accuracy: 0.8281

Epoch 00016: val_accuracy did not improve from 0.84375
Epoch 17/20
8/8 [==============================] - 16s 2s/step - loss: 0.3984 - accuracy: 0.8709 - val_loss: 0.5902 - val_accuracy: 0.8438

Epoch 00017: val_accuracy did not improve from 0.84375
Epoch 18/20
8/8 [==============================] - 15s 2s/step - loss: 0.3836 - accuracy: 0.8813 - val_loss: 0.4651 - val_accuracy: 0.8438

Epoch 00018: val_accuracy did not improve from 0.84375
Epoch 19/20
8/8 [==============================] - 15s 2s/step - loss: 0.3563 - accuracy: 0.8593 - val_loss: 0.4765 - val_accuracy: 0.8594

Epoch 00019: val_accuracy improved from 0.84375 to 0.85938, saving model to models\best_model_esc10_exp_1_1
Epoch 20/20
8/8 [==============================] - 17s 2s/step - loss: 0.3519 - accuracy: 0.8690 - val_loss: 0.6033 - val_accuracy: 0.8125

Epoch 00020: val_accuracy did not improve from 0.85938
Test accuracy:  0.862500011920929
Epoch 1/20
8/8 [==============================] - 18s 2s/step - loss: 2.2755 - accuracy: 0.1378 - val_loss: 1.8930 - val_accuracy: 0.2969

Epoch 00001: val_accuracy improved from -inf to 0.29688, saving model to models\best_model_esc10_exp_1_2
Epoch 2/20
8/8 [==============================] - 17s 2s/step - loss: 2.0740 - accuracy: 0.2101 - val_loss: 1.6555 - val_accuracy: 0.3125

Epoch 00002: val_accuracy improved from 0.29688 to 0.31250, saving model to models\best_model_esc10_exp_1_2
Epoch 3/20
8/8 [==============================] - 17s 2s/step - loss: 1.8763 - accuracy: 0.2744 - val_loss: 1.4661 - val_accuracy: 0.6094

Epoch 00003: val_accuracy improved from 0.31250 to 0.60938, saving model to models\best_model_esc10_exp_1_2
Epoch 4/20
8/8 [==============================] - 17s 2s/step - loss: 1.7197 - accuracy: 0.3577 - val_loss: 1.4248 - val_accuracy: 0.5156

Epoch 00004: val_accuracy did not improve from 0.60938
Epoch 5/20
8/8 [==============================] - 17s 2s/step - loss: 1.5459 - accuracy: 0.4625 - val_loss: 1.1573 - val_accuracy: 0.5938

Epoch 00005: val_accuracy did not improve from 0.60938
Epoch 6/20
8/8 [==============================] - 16s 2s/step - loss: 1.4668 - accuracy: 0.4590 - val_loss: 1.1025 - val_accuracy: 0.6094

Epoch 00006: val_accuracy did not improve from 0.60938
Epoch 7/20
8/8 [==============================] - 16s 2s/step - loss: 1.2758 - accuracy: 0.5368 - val_loss: 0.8581 - val_accuracy: 0.6875

Epoch 00007: val_accuracy improved from 0.60938 to 0.68750, saving model to models\best_model_esc10_exp_1_2
Epoch 8/20
8/8 [==============================] - 15s 2s/step - loss: 1.0964 - accuracy: 0.6026 - val_loss: 0.8434 - val_accuracy: 0.7188

Epoch 00008: val_accuracy improved from 0.68750 to 0.71875, saving model to models\best_model_esc10_exp_1_2
Epoch 9/20
8/8 [==============================] - 16s 2s/step - loss: 1.0717 - accuracy: 0.6288 - val_loss: 0.7665 - val_accuracy: 0.7500

Epoch 00009: val_accuracy improved from 0.71875 to 0.75000, saving model to models\best_model_esc10_exp_1_2
Epoch 10/20
8/8 [==============================] - 18s 2s/step - loss: 0.8092 - accuracy: 0.7037 - val_loss: 0.7499 - val_accuracy: 0.7031

Epoch 00010: val_accuracy did not improve from 0.75000
Epoch 11/20
8/8 [==============================] - 17s 2s/step - loss: 0.8175 - accuracy: 0.7088 - val_loss: 0.7616 - val_accuracy: 0.7656

Epoch 00011: val_accuracy improved from 0.75000 to 0.76562, saving model to models\best_model_esc10_exp_1_2
Epoch 12/20
8/8 [==============================] - 17s 2s/step - loss: 0.7985 - accuracy: 0.7144 - val_loss: 0.6381 - val_accuracy: 0.8281

Epoch 00012: val_accuracy improved from 0.76562 to 0.82812, saving model to models\best_model_esc10_exp_1_2
Epoch 13/20
8/8 [==============================] - 17s 2s/step - loss: 0.7198 - accuracy: 0.7523 - val_loss: 0.5150 - val_accuracy: 0.8438

Epoch 00013: val_accuracy improved from 0.82812 to 0.84375, saving model to models\best_model_esc10_exp_1_2
Epoch 14/20
8/8 [==============================] - 17s 2s/step - loss: 0.6623 - accuracy: 0.7366 - val_loss: 0.6876 - val_accuracy: 0.7969

Epoch 00014: val_accuracy did not improve from 0.84375
Epoch 15/20
8/8 [==============================] - 17s 2s/step - loss: 0.5946 - accuracy: 0.7424 - val_loss: 0.5716 - val_accuracy: 0.8125

Epoch 00015: val_accuracy did not improve from 0.84375
Epoch 16/20
8/8 [==============================] - 16s 2s/step - loss: 0.5307 - accuracy: 0.8167 - val_loss: 0.6781 - val_accuracy: 0.7969

Epoch 00016: val_accuracy did not improve from 0.84375
Epoch 17/20
8/8 [==============================] - 16s 2s/step - loss: 0.4776 - accuracy: 0.8218 - val_loss: 0.5175 - val_accuracy: 0.8281

Epoch 00017: val_accuracy did not improve from 0.84375
Epoch 18/20
8/8 [==============================] - 17s 2s/step - loss: 0.3739 - accuracy: 0.8839 - val_loss: 0.6172 - val_accuracy: 0.7969

Epoch 00018: val_accuracy did not improve from 0.84375
Epoch 19/20
8/8 [==============================] - 17s 2s/step - loss: 0.3918 - accuracy: 0.8506 - val_loss: 0.5314 - val_accuracy: 0.8281

Epoch 00019: val_accuracy did not improve from 0.84375
Epoch 20/20
8/8 [==============================] - 17s 2s/step - loss: 0.3321 - accuracy: 0.8717 - val_loss: 0.4850 - val_accuracy: 0.8125

Epoch 00020: val_accuracy did not improve from 0.84375
Test accuracy:  0.8374999761581421
Epoch 1/20
8/8 [==============================] - 15s 2s/step - loss: 2.2630 - accuracy: 0.1324 - val_loss: 1.9734 - val_accuracy: 0.1875

Epoch 00001: val_accuracy improved from -inf to 0.18750, saving model to models\best_model_esc10_exp_1_3
Epoch 2/20
8/8 [==============================] - 14s 2s/step - loss: 2.0861 - accuracy: 0.2060 - val_loss: 1.6642 - val_accuracy: 0.3281

Epoch 00002: val_accuracy improved from 0.18750 to 0.32812, saving model to models\best_model_esc10_exp_1_3
Epoch 3/20
8/8 [==============================] - 14s 2s/step - loss: 1.7622 - accuracy: 0.3114 - val_loss: 1.4231 - val_accuracy: 0.5312

Epoch 00003: val_accuracy improved from 0.32812 to 0.53125, saving model to models\best_model_esc10_exp_1_3
Epoch 4/20
8/8 [==============================] - 14s 2s/step - loss: 1.6341 - accuracy: 0.3804 - val_loss: 1.3015 - val_accuracy: 0.5156

Epoch 00004: val_accuracy did not improve from 0.53125
Epoch 5/20
8/8 [==============================] - 15s 2s/step - loss: 1.4399 - accuracy: 0.4757 - val_loss: 0.9688 - val_accuracy: 0.6406

Epoch 00005: val_accuracy improved from 0.53125 to 0.64062, saving model to models\best_model_esc10_exp_1_3
Epoch 6/20
8/8 [==============================] - 15s 2s/step - loss: 1.1886 - accuracy: 0.5808 - val_loss: 0.9042 - val_accuracy: 0.6562

Epoch 00006: val_accuracy improved from 0.64062 to 0.65625, saving model to models\best_model_esc10_exp_1_3
Epoch 7/20
8/8 [==============================] - 15s 2s/step - loss: 1.1459 - accuracy: 0.5940 - val_loss: 0.7584 - val_accuracy: 0.7031

Epoch 00007: val_accuracy improved from 0.65625 to 0.70312, saving model to models\best_model_esc10_exp_1_3
Epoch 8/20
8/8 [==============================] - 15s 2s/step - loss: 0.9266 - accuracy: 0.6705 - val_loss: 0.6512 - val_accuracy: 0.7969

Epoch 00008: val_accuracy improved from 0.70312 to 0.79688, saving model to models\best_model_esc10_exp_1_3
Epoch 9/20
8/8 [==============================] - 15s 2s/step - loss: 0.8537 - accuracy: 0.6958 - val_loss: 0.6269 - val_accuracy: 0.7656

Epoch 00009: val_accuracy did not improve from 0.79688
Epoch 10/20
8/8 [==============================] - 15s 2s/step - loss: 0.7483 - accuracy: 0.7386 - val_loss: 0.7476 - val_accuracy: 0.7656

Epoch 00010: val_accuracy did not improve from 0.79688
Epoch 11/20
8/8 [==============================] - 14s 2s/step - loss: 0.7718 - accuracy: 0.7149 - val_loss: 0.5230 - val_accuracy: 0.7812

Epoch 00011: val_accuracy did not improve from 0.79688
Epoch 12/20
8/8 [==============================] - 14s 2s/step - loss: 0.6456 - accuracy: 0.7669 - val_loss: 0.6272 - val_accuracy: 0.7969

Epoch 00012: val_accuracy did not improve from 0.79688
Epoch 13/20
8/8 [==============================] - 13s 2s/step - loss: 0.6821 - accuracy: 0.7628 - val_loss: 0.5652 - val_accuracy: 0.8125

Epoch 00013: val_accuracy improved from 0.79688 to 0.81250, saving model to models\best_model_esc10_exp_1_3
Epoch 14/20
8/8 [==============================] - 15s 2s/step - loss: 0.6195 - accuracy: 0.7586 - val_loss: 0.4932 - val_accuracy: 0.8438

Epoch 00014: val_accuracy improved from 0.81250 to 0.84375, saving model to models\best_model_esc10_exp_1_3
Epoch 15/20
8/8 [==============================] - 15s 2s/step - loss: 0.5554 - accuracy: 0.7788 - val_loss: 0.6415 - val_accuracy: 0.7812

Epoch 00015: val_accuracy did not improve from 0.84375
Epoch 16/20
8/8 [==============================] - 15s 2s/step - loss: 0.4678 - accuracy: 0.8235 - val_loss: 0.5815 - val_accuracy: 0.8438

Epoch 00016: val_accuracy did not improve from 0.84375
Epoch 17/20
8/8 [==============================] - 15s 2s/step - loss: 0.4310 - accuracy: 0.8445 - val_loss: 0.6869 - val_accuracy: 0.7969

Epoch 00017: val_accuracy did not improve from 0.84375
Epoch 18/20
8/8 [==============================] - 15s 2s/step - loss: 0.4316 - accuracy: 0.8513 - val_loss: 0.5973 - val_accuracy: 0.8438

Epoch 00018: val_accuracy did not improve from 0.84375
Epoch 19/20
8/8 [==============================] - 15s 2s/step - loss: 0.3819 - accuracy: 0.8735 - val_loss: 0.6422 - val_accuracy: 0.7969

Epoch 00019: val_accuracy did not improve from 0.84375
Epoch 20/20
8/8 [==============================] - 15s 2s/step - loss: 0.4183 - accuracy: 0.8466 - val_loss: 0.5581 - val_accuracy: 0.8594

Epoch 00020: val_accuracy improved from 0.84375 to 0.85938, saving model to models\best_model_esc10_exp_1_3
Test accuracy:  0.8374999761581421
Epoch 1/20
8/8 [==============================] - 14s 2s/step - loss: 2.2821 - accuracy: 0.1145 - val_loss: 1.9449 - val_accuracy: 0.2500

Epoch 00001: val_accuracy improved from -inf to 0.25000, saving model to models\best_model_esc10_exp_1_4
Epoch 2/20
8/8 [==============================] - 14s 2s/step - loss: 2.0082 - accuracy: 0.2504 - val_loss: 1.6001 - val_accuracy: 0.3594

Epoch 00002: val_accuracy improved from 0.25000 to 0.35938, saving model to models\best_model_esc10_exp_1_4
Epoch 3/20
8/8 [==============================] - 14s 2s/step - loss: 1.8273 - accuracy: 0.3176 - val_loss: 1.4215 - val_accuracy: 0.4531

Epoch 00003: val_accuracy improved from 0.35938 to 0.45312, saving model to models\best_model_esc10_exp_1_4
Epoch 4/20
8/8 [==============================] - 14s 2s/step - loss: 1.5860 - accuracy: 0.4426 - val_loss: 1.1371 - val_accuracy: 0.6719

Epoch 00004: val_accuracy improved from 0.45312 to 0.67188, saving model to models\best_model_esc10_exp_1_4
Epoch 5/20
8/8 [==============================] - 13s 2s/step - loss: 1.3427 - accuracy: 0.5086 - val_loss: 0.9427 - val_accuracy: 0.6406

Epoch 00005: val_accuracy did not improve from 0.67188
Epoch 6/20
8/8 [==============================] - 13s 2s/step - loss: 1.2245 - accuracy: 0.5718 - val_loss: 0.9605 - val_accuracy: 0.5469

Epoch 00006: val_accuracy did not improve from 0.67188
Epoch 7/20
8/8 [==============================] - 15s 2s/step - loss: 1.0783 - accuracy: 0.6193 - val_loss: 0.7313 - val_accuracy: 0.7031

Epoch 00007: val_accuracy improved from 0.67188 to 0.70312, saving model to models\best_model_esc10_exp_1_4
Epoch 8/20
8/8 [==============================] - 15s 2s/step - loss: 0.9509 - accuracy: 0.6403 - val_loss: 0.6745 - val_accuracy: 0.7500

Epoch 00008: val_accuracy improved from 0.70312 to 0.75000, saving model to models\best_model_esc10_exp_1_4
Epoch 9/20
8/8 [==============================] - 15s 2s/step - loss: 0.8303 - accuracy: 0.6881 - val_loss: 0.5515 - val_accuracy: 0.8438

Epoch 00009: val_accuracy improved from 0.75000 to 0.84375, saving model to models\best_model_esc10_exp_1_4
Epoch 10/20
8/8 [==============================] - 15s 2s/step - loss: 0.8012 - accuracy: 0.7178 - val_loss: 0.6740 - val_accuracy: 0.7188

Epoch 00010: val_accuracy did not improve from 0.84375
Epoch 11/20
8/8 [==============================] - 15s 2s/step - loss: 0.7560 - accuracy: 0.7425 - val_loss: 0.5535 - val_accuracy: 0.8125

Epoch 00011: val_accuracy did not improve from 0.84375
Epoch 12/20
8/8 [==============================] - 14s 2s/step - loss: 0.6504 - accuracy: 0.7847 - val_loss: 0.6857 - val_accuracy: 0.7969

Epoch 00012: val_accuracy did not improve from 0.84375
Epoch 13/20
8/8 [==============================] - 14s 2s/step - loss: 0.5644 - accuracy: 0.7916 - val_loss: 0.5806 - val_accuracy: 0.7812

Epoch 00013: val_accuracy did not improve from 0.84375
Epoch 14/20
8/8 [==============================] - 15s 2s/step - loss: 0.5282 - accuracy: 0.8030 - val_loss: 0.6291 - val_accuracy: 0.8438

Epoch 00014: val_accuracy did not improve from 0.84375
Epoch 15/20
8/8 [==============================] - 14s 2s/step - loss: 0.5051 - accuracy: 0.8398 - val_loss: 0.5632 - val_accuracy: 0.7812

Epoch 00015: val_accuracy did not improve from 0.84375
Epoch 16/20
8/8 [==============================] - 13s 2s/step - loss: 0.3924 - accuracy: 0.8491 - val_loss: 0.6411 - val_accuracy: 0.7969

Epoch 00016: val_accuracy did not improve from 0.84375
Epoch 17/20
8/8 [==============================] - 13s 2s/step - loss: 0.3724 - accuracy: 0.8614 - val_loss: 0.7260 - val_accuracy: 0.8125

Epoch 00017: val_accuracy did not improve from 0.84375
Epoch 18/20
8/8 [==============================] - 14s 2s/step - loss: 0.4515 - accuracy: 0.8553 - val_loss: 0.4273 - val_accuracy: 0.8281

Epoch 00018: val_accuracy did not improve from 0.84375
Epoch 19/20
8/8 [==============================] - 15s 2s/step - loss: 0.4155 - accuracy: 0.8332 - val_loss: 0.6712 - val_accuracy: 0.7969

Epoch 00019: val_accuracy did not improve from 0.84375
Epoch 20/20
8/8 [==============================] - 14s 2s/step - loss: 0.3720 - accuracy: 0.8698 - val_loss: 0.5554 - val_accuracy: 0.7812

Epoch 00020: val_accuracy did not improve from 0.84375
Test accuracy:  0.762499988079071
Epoch 1/20
8/8 [==============================] - 15s 2s/step - loss: 2.2856 - accuracy: 0.0981 - val_loss: 1.9512 - val_accuracy: 0.2031

Epoch 00001: val_accuracy improved from -inf to 0.20312, saving model to models\best_model_esc10_exp_1_5
Epoch 2/20
8/8 [==============================] - 15s 2s/step - loss: 2.0495 - accuracy: 0.2191 - val_loss: 1.7176 - val_accuracy: 0.2969

Epoch 00002: val_accuracy improved from 0.20312 to 0.29688, saving model to models\best_model_esc10_exp_1_5
Epoch 3/20
8/8 [==============================] - 15s 2s/step - loss: 1.8313 - accuracy: 0.3102 - val_loss: 1.5150 - val_accuracy: 0.4375

Epoch 00003: val_accuracy improved from 0.29688 to 0.43750, saving model to models\best_model_esc10_exp_1_5
Epoch 4/20
8/8 [==============================] - 15s 2s/step - loss: 1.6576 - accuracy: 0.3579 - val_loss: 1.3769 - val_accuracy: 0.4688

Epoch 00004: val_accuracy improved from 0.43750 to 0.46875, saving model to models\best_model_esc10_exp_1_5
Epoch 5/20
8/8 [==============================] - 14s 2s/step - loss: 1.4706 - accuracy: 0.4708 - val_loss: 1.1623 - val_accuracy: 0.5625

Epoch 00005: val_accuracy improved from 0.46875 to 0.56250, saving model to models\best_model_esc10_exp_1_5
Epoch 6/20
8/8 [==============================] - 15s 2s/step - loss: 1.3695 - accuracy: 0.4835 - val_loss: 1.0585 - val_accuracy: 0.6406

Epoch 00006: val_accuracy improved from 0.56250 to 0.64062, saving model to models\best_model_esc10_exp_1_5
Epoch 7/20
8/8 [==============================] - 14s 2s/step - loss: 1.1407 - accuracy: 0.6025 - val_loss: 0.8035 - val_accuracy: 0.6719

Epoch 00007: val_accuracy improved from 0.64062 to 0.67188, saving model to models\best_model_esc10_exp_1_5
Epoch 8/20
8/8 [==============================] - 13s 2s/step - loss: 1.0747 - accuracy: 0.6535 - val_loss: 0.8132 - val_accuracy: 0.7188

Epoch 00008: val_accuracy improved from 0.67188 to 0.71875, saving model to models\best_model_esc10_exp_1_5
Epoch 9/20
8/8 [==============================] - 13s 2s/step - loss: 0.9234 - accuracy: 0.6656 - val_loss: 0.7714 - val_accuracy: 0.7344

Epoch 00009: val_accuracy improved from 0.71875 to 0.73438, saving model to models\best_model_esc10_exp_1_5
Epoch 10/20
8/8 [==============================] - 14s 2s/step - loss: 0.8160 - accuracy: 0.7073 - val_loss: 0.7011 - val_accuracy: 0.7344

Epoch 00010: val_accuracy did not improve from 0.73438
Epoch 11/20
8/8 [==============================] - 15s 2s/step - loss: 0.8219 - accuracy: 0.7038 - val_loss: 0.6592 - val_accuracy: 0.8281

Epoch 00011: val_accuracy improved from 0.73438 to 0.82812, saving model to models\best_model_esc10_exp_1_5
Epoch 12/20
8/8 [==============================] - 15s 2s/step - loss: 0.7035 - accuracy: 0.7459 - val_loss: 0.4315 - val_accuracy: 0.8906

Epoch 00012: val_accuracy improved from 0.82812 to 0.89062, saving model to models\best_model_esc10_exp_1_5
Epoch 13/20
8/8 [==============================] - 15s 2s/step - loss: 0.5677 - accuracy: 0.8156 - val_loss: 0.6412 - val_accuracy: 0.7500

Epoch 00013: val_accuracy did not improve from 0.89062
Epoch 14/20
8/8 [==============================] - 15s 2s/step - loss: 0.5359 - accuracy: 0.7932 - val_loss: 0.5457 - val_accuracy: 0.7969

Epoch 00014: val_accuracy did not improve from 0.89062
Epoch 15/20
8/8 [==============================] - 15s 2s/step - loss: 0.4686 - accuracy: 0.8160 - val_loss: 0.5305 - val_accuracy: 0.8438

Epoch 00015: val_accuracy did not improve from 0.89062
Epoch 16/20
8/8 [==============================] - 14s 2s/step - loss: 0.4525 - accuracy: 0.8202 - val_loss: 0.6360 - val_accuracy: 0.7812

Epoch 00016: val_accuracy did not improve from 0.89062
Epoch 17/20
8/8 [==============================] - 14s 2s/step - loss: 0.4969 - accuracy: 0.8242 - val_loss: 0.5846 - val_accuracy: 0.8125

Epoch 00017: val_accuracy did not improve from 0.89062
Epoch 18/20
8/8 [==============================] - 15s 2s/step - loss: 0.4792 - accuracy: 0.8102 - val_loss: 0.4595 - val_accuracy: 0.8750

Epoch 00018: val_accuracy did not improve from 0.89062
Epoch 19/20
8/8 [==============================] - 15s 2s/step - loss: 0.3435 - accuracy: 0.8755 - val_loss: 0.5125 - val_accuracy: 0.8281

Epoch 00019: val_accuracy did not improve from 0.89062
Epoch 20/20
8/8 [==============================] - 14s 2s/step - loss: 0.3223 - accuracy: 0.8796 - val_loss: 0.5898 - val_accuracy: 0.8281

Epoch 00020: val_accuracy did not improve from 0.89062
Test accuracy:  0.824999988079071
Epoch 1/20
8/8 [==============================] - 14s 2s/step - loss: 2.2935 - accuracy: 0.1179 - val_loss: 1.9443 - val_accuracy: 0.3125

Epoch 00001: val_accuracy improved from -inf to 0.31250, saving model to models\best_model_esc10_exp_1_6
Epoch 2/20
8/8 [==============================] - 14s 2s/step - loss: 2.0228 - accuracy: 0.2363 - val_loss: 1.6883 - val_accuracy: 0.4688

Epoch 00002: val_accuracy improved from 0.31250 to 0.46875, saving model to models\best_model_esc10_exp_1_6
Epoch 3/20
8/8 [==============================] - 14s 2s/step - loss: 1.8289 - accuracy: 0.3225 - val_loss: 1.4190 - val_accuracy: 0.5156

Epoch 00003: val_accuracy improved from 0.46875 to 0.51562, saving model to models\best_model_esc10_exp_1_6
Epoch 4/20
8/8 [==============================] - 15s 2s/step - loss: 1.6862 - accuracy: 0.3585 - val_loss: 1.2537 - val_accuracy: 0.5469

Epoch 00004: val_accuracy improved from 0.51562 to 0.54688, saving model to models\best_model_esc10_exp_1_6
Epoch 5/20
8/8 [==============================] - 15s 2s/step - loss: 1.5330 - accuracy: 0.4388 - val_loss: 1.0501 - val_accuracy: 0.6406

Epoch 00005: val_accuracy improved from 0.54688 to 0.64062, saving model to models\best_model_esc10_exp_1_6
Epoch 6/20
8/8 [==============================] - 15s 2s/step - loss: 1.3828 - accuracy: 0.5286 - val_loss: 0.9082 - val_accuracy: 0.7031

Epoch 00006: val_accuracy improved from 0.64062 to 0.70312, saving model to models\best_model_esc10_exp_1_6
Epoch 7/20
8/8 [==============================] - 15s 2s/step - loss: 1.1682 - accuracy: 0.5465 - val_loss: 0.9589 - val_accuracy: 0.6406

Epoch 00007: val_accuracy did not improve from 0.70312
Epoch 8/20
8/8 [==============================] - 15s 2s/step - loss: 1.0617 - accuracy: 0.5940 - val_loss: 0.7971 - val_accuracy: 0.7500

Epoch 00008: val_accuracy improved from 0.70312 to 0.75000, saving model to models\best_model_esc10_exp_1_6
Epoch 9/20
8/8 [==============================] - 16s 2s/step - loss: 0.9465 - accuracy: 0.6301 - val_loss: 0.7918 - val_accuracy: 0.7500

Epoch 00009: val_accuracy did not improve from 0.75000
Epoch 10/20
8/8 [==============================] - 15s 2s/step - loss: 0.8064 - accuracy: 0.6880 - val_loss: 0.6618 - val_accuracy: 0.8125

Epoch 00010: val_accuracy improved from 0.75000 to 0.81250, saving model to models\best_model_esc10_exp_1_6
Epoch 11/20
8/8 [==============================] - 14s 2s/step - loss: 0.8269 - accuracy: 0.7068 - val_loss: 0.6420 - val_accuracy: 0.8125

Epoch 00011: val_accuracy did not improve from 0.81250
Epoch 12/20
8/8 [==============================] - 13s 2s/step - loss: 0.7046 - accuracy: 0.7528 - val_loss: 0.5399 - val_accuracy: 0.8281

Epoch 00012: val_accuracy improved from 0.81250 to 0.82812, saving model to models\best_model_esc10_exp_1_6
Epoch 13/20
8/8 [==============================] - 14s 2s/step - loss: 0.6260 - accuracy: 0.7649 - val_loss: 0.5782 - val_accuracy: 0.8125

Epoch 00013: val_accuracy did not improve from 0.82812
Epoch 14/20
8/8 [==============================] - 15s 2s/step - loss: 0.5294 - accuracy: 0.8068 - val_loss: 0.6434 - val_accuracy: 0.7969

Epoch 00014: val_accuracy did not improve from 0.82812
Epoch 15/20
8/8 [==============================] - 15s 2s/step - loss: 0.6037 - accuracy: 0.7716 - val_loss: 0.5328 - val_accuracy: 0.8750

Epoch 00015: val_accuracy improved from 0.82812 to 0.87500, saving model to models\best_model_esc10_exp_1_6
Epoch 16/20
8/8 [==============================] - 15s 2s/step - loss: 0.5528 - accuracy: 0.7830 - val_loss: 0.5135 - val_accuracy: 0.8438

Epoch 00016: val_accuracy did not improve from 0.87500
Epoch 17/20
8/8 [==============================] - 15s 2s/step - loss: 0.4143 - accuracy: 0.8252 - val_loss: 0.4956 - val_accuracy: 0.8594

Epoch 00017: val_accuracy did not improve from 0.87500
Epoch 18/20
8/8 [==============================] - 15s 2s/step - loss: 0.4437 - accuracy: 0.8409 - val_loss: 0.5815 - val_accuracy: 0.8438

Epoch 00018: val_accuracy did not improve from 0.87500
Epoch 19/20
8/8 [==============================] - 15s 2s/step - loss: 0.3808 - accuracy: 0.8549 - val_loss: 0.4843 - val_accuracy: 0.8438

Epoch 00019: val_accuracy did not improve from 0.87500
Epoch 20/20
8/8 [==============================] - 14s 2s/step - loss: 0.3799 - accuracy: 0.8539 - val_loss: 0.4554 - val_accuracy: 0.8438

Epoch 00020: val_accuracy did not improve from 0.87500
Test accuracy:  0.800000011920929
Epoch 1/20
8/8 [==============================] - 15s 2s/step - loss: 2.2972 - accuracy: 0.1845 - val_loss: 1.9143 - val_accuracy: 0.2188

Epoch 00001: val_accuracy improved from -inf to 0.21875, saving model to models\best_model_esc10_exp_1_7
Epoch 2/20
8/8 [==============================] - 14s 2s/step - loss: 1.9889 - accuracy: 0.2586 - val_loss: 1.6890 - val_accuracy: 0.3750

Epoch 00002: val_accuracy improved from 0.21875 to 0.37500, saving model to models\best_model_esc10_exp_1_7
Epoch 3/20
8/8 [==============================] - 16s 2s/step - loss: 1.7175 - accuracy: 0.3385 - val_loss: 1.3927 - val_accuracy: 0.4844

Epoch 00003: val_accuracy improved from 0.37500 to 0.48438, saving model to models\best_model_esc10_exp_1_7
Epoch 4/20
8/8 [==============================] - 16s 2s/step - loss: 1.6848 - accuracy: 0.3939 - val_loss: 1.3420 - val_accuracy: 0.5469

Epoch 00004: val_accuracy improved from 0.48438 to 0.54688, saving model to models\best_model_esc10_exp_1_7
Epoch 5/20
8/8 [==============================] - 16s 2s/step - loss: 1.4157 - accuracy: 0.4953 - val_loss: 0.9602 - val_accuracy: 0.6875

Epoch 00005: val_accuracy improved from 0.54688 to 0.68750, saving model to models\best_model_esc10_exp_1_7
Epoch 6/20
8/8 [==============================] - 15s 2s/step - loss: 1.1515 - accuracy: 0.5794 - val_loss: 0.9220 - val_accuracy: 0.6562

Epoch 00006: val_accuracy did not improve from 0.68750
Epoch 7/20
8/8 [==============================] - 15s 2s/step - loss: 1.0126 - accuracy: 0.6321 - val_loss: 0.8612 - val_accuracy: 0.7188

Epoch 00007: val_accuracy improved from 0.68750 to 0.71875, saving model to models\best_model_esc10_exp_1_7
Epoch 8/20
8/8 [==============================] - 16s 2s/step - loss: 0.8656 - accuracy: 0.6604 - val_loss: 0.7525 - val_accuracy: 0.6719

Epoch 00008: val_accuracy did not improve from 0.71875
Epoch 9/20
8/8 [==============================] - 16s 2s/step - loss: 0.8313 - accuracy: 0.6953 - val_loss: 0.7494 - val_accuracy: 0.6719

Epoch 00009: val_accuracy did not improve from 0.71875
Epoch 10/20
8/8 [==============================] - 16s 2s/step - loss: 0.7119 - accuracy: 0.7651 - val_loss: 0.7882 - val_accuracy: 0.6562

Epoch 00010: val_accuracy did not improve from 0.71875
Epoch 11/20
8/8 [==============================] - 15s 2s/step - loss: 0.7408 - accuracy: 0.7249 - val_loss: 0.7692 - val_accuracy: 0.7344

Epoch 00011: val_accuracy improved from 0.71875 to 0.73438, saving model to models\best_model_esc10_exp_1_7
Epoch 12/20
8/8 [==============================] - 15s 2s/step - loss: 0.5894 - accuracy: 0.7951 - val_loss: 0.8000 - val_accuracy: 0.7344

Epoch 00012: val_accuracy did not improve from 0.73438
Epoch 13/20
8/8 [==============================] - 14s 2s/step - loss: 0.5133 - accuracy: 0.7945 - val_loss: 0.7749 - val_accuracy: 0.6875

Epoch 00013: val_accuracy did not improve from 0.73438
Epoch 14/20
8/8 [==============================] - 15s 2s/step - loss: 0.5209 - accuracy: 0.7940 - val_loss: 0.6181 - val_accuracy: 0.8281

Epoch 00014: val_accuracy improved from 0.73438 to 0.82812, saving model to models\best_model_esc10_exp_1_7
Epoch 15/20
8/8 [==============================] - 16s 2s/step - loss: 0.4832 - accuracy: 0.8091 - val_loss: 0.6331 - val_accuracy: 0.8125

Epoch 00015: val_accuracy did not improve from 0.82812
Epoch 16/20
8/8 [==============================] - 16s 2s/step - loss: 0.4071 - accuracy: 0.8465 - val_loss: 0.5870 - val_accuracy: 0.7969

Epoch 00016: val_accuracy did not improve from 0.82812
Epoch 17/20
8/8 [==============================] - 15s 2s/step - loss: 0.4157 - accuracy: 0.8636 - val_loss: 0.7757 - val_accuracy: 0.7500

Epoch 00017: val_accuracy did not improve from 0.82812
Epoch 18/20
8/8 [==============================] - 15s 2s/step - loss: 0.4130 - accuracy: 0.8579 - val_loss: 0.6047 - val_accuracy: 0.8594

Epoch 00018: val_accuracy improved from 0.82812 to 0.85938, saving model to models\best_model_esc10_exp_1_7
Epoch 19/20
8/8 [==============================] - 14s 2s/step - loss: 0.3844 - accuracy: 0.8614 - val_loss: 0.5691 - val_accuracy: 0.8438

Epoch 00019: val_accuracy did not improve from 0.85938
Epoch 20/20
8/8 [==============================] - 15s 2s/step - loss: 0.3744 - accuracy: 0.8960 - val_loss: 0.6839 - val_accuracy: 0.8281

Epoch 00020: val_accuracy did not improve from 0.85938
Test accuracy:  0.7875000238418579
Epoch 1/20
8/8 [==============================] - 18s 2s/step - loss: 2.2828 - accuracy: 0.1169 - val_loss: 1.9944 - val_accuracy: 0.2812

Epoch 00001: val_accuracy improved from -inf to 0.28125, saving model to models\best_model_esc10_exp_1_8
Epoch 2/20
8/8 [==============================] - 17s 2s/step - loss: 1.9943 - accuracy: 0.2864 - val_loss: 1.6989 - val_accuracy: 0.3750

Epoch 00002: val_accuracy improved from 0.28125 to 0.37500, saving model to models\best_model_esc10_exp_1_8
Epoch 3/20
8/8 [==============================] - 17s 2s/step - loss: 1.8736 - accuracy: 0.2605 - val_loss: 1.5725 - val_accuracy: 0.3750

Epoch 00003: val_accuracy did not improve from 0.37500
Epoch 4/20
8/8 [==============================] - 17s 2s/step - loss: 1.6766 - accuracy: 0.3334 - val_loss: 1.3294 - val_accuracy: 0.4844

Epoch 00004: val_accuracy improved from 0.37500 to 0.48438, saving model to models\best_model_esc10_exp_1_8
Epoch 5/20
8/8 [==============================] - 16s 2s/step - loss: 1.4826 - accuracy: 0.4261 - val_loss: 1.2341 - val_accuracy: 0.6094

Epoch 00005: val_accuracy improved from 0.48438 to 0.60938, saving model to models\best_model_esc10_exp_1_8
Epoch 6/20
8/8 [==============================] - 15s 2s/step - loss: 1.3929 - accuracy: 0.4977 - val_loss: 1.0594 - val_accuracy: 0.5938

Epoch 00006: val_accuracy did not improve from 0.60938
Epoch 7/20
8/8 [==============================] - 17s 2s/step - loss: 1.2498 - accuracy: 0.5559 - val_loss: 0.9030 - val_accuracy: 0.6562

Epoch 00007: val_accuracy improved from 0.60938 to 0.65625, saving model to models\best_model_esc10_exp_1_8
Epoch 8/20
8/8 [==============================] - 17s 2s/step - loss: 1.1979 - accuracy: 0.5364 - val_loss: 0.7829 - val_accuracy: 0.7031

Epoch 00008: val_accuracy improved from 0.65625 to 0.70312, saving model to models\best_model_esc10_exp_1_8
Epoch 9/20
8/8 [==============================] - 17s 2s/step - loss: 0.9947 - accuracy: 0.6512 - val_loss: 0.7929 - val_accuracy: 0.7188

Epoch 00009: val_accuracy improved from 0.70312 to 0.71875, saving model to models\best_model_esc10_exp_1_8
Epoch 10/20
8/8 [==============================] - 17s 2s/step - loss: 0.9226 - accuracy: 0.6775 - val_loss: 0.6958 - val_accuracy: 0.8281

Epoch 00010: val_accuracy improved from 0.71875 to 0.82812, saving model to models\best_model_esc10_exp_1_8
Epoch 11/20
8/8 [==============================] - 16s 2s/step - loss: 0.8826 - accuracy: 0.7256 - val_loss: 0.5501 - val_accuracy: 0.8281

Epoch 00011: val_accuracy did not improve from 0.82812
Epoch 12/20
8/8 [==============================] - 17s 2s/step - loss: 0.7194 - accuracy: 0.7393 - val_loss: 0.5095 - val_accuracy: 0.8281

Epoch 00012: val_accuracy did not improve from 0.82812
Epoch 13/20
8/8 [==============================] - 16s 2s/step - loss: 0.6753 - accuracy: 0.7291 - val_loss: 0.5768 - val_accuracy: 0.7500

Epoch 00013: val_accuracy did not improve from 0.82812
Epoch 14/20
8/8 [==============================] - 16s 2s/step - loss: 0.7022 - accuracy: 0.7500 - val_loss: 0.5685 - val_accuracy: 0.8281

Epoch 00014: val_accuracy did not improve from 0.82812
Epoch 15/20
8/8 [==============================] - 15s 2s/step - loss: 0.6701 - accuracy: 0.7718 - val_loss: 0.5982 - val_accuracy: 0.7969

Epoch 00015: val_accuracy did not improve from 0.82812
Epoch 16/20
8/8 [==============================] - 17s 2s/step - loss: 0.5558 - accuracy: 0.7984 - val_loss: 0.4414 - val_accuracy: 0.8281

Epoch 00016: val_accuracy did not improve from 0.82812
Epoch 17/20
8/8 [==============================] - 17s 2s/step - loss: 0.4395 - accuracy: 0.8398 - val_loss: 0.4496 - val_accuracy: 0.8438

Epoch 00017: val_accuracy improved from 0.82812 to 0.84375, saving model to models\best_model_esc10_exp_1_8
Epoch 18/20
8/8 [==============================] - 17s 2s/step - loss: 0.5226 - accuracy: 0.8372 - val_loss: 0.6134 - val_accuracy: 0.8438

Epoch 00018: val_accuracy did not improve from 0.84375
Epoch 19/20
8/8 [==============================] - 17s 2s/step - loss: 0.4059 - accuracy: 0.8644 - val_loss: 0.3427 - val_accuracy: 0.8906

Epoch 00019: val_accuracy improved from 0.84375 to 0.89062, saving model to models\best_model_esc10_exp_1_8
Epoch 20/20
8/8 [==============================] - 16s 2s/step - loss: 0.3601 - accuracy: 0.8767 - val_loss: 0.4296 - val_accuracy: 0.9062

Epoch 00020: val_accuracy improved from 0.89062 to 0.90625, saving model to models\best_model_esc10_exp_1_8
Test accuracy:  0.862500011920929
Epoch 1/20
8/8 [==============================] - 17s 2s/step - loss: 2.2515 - accuracy: 0.1433 - val_loss: 1.8122 - val_accuracy: 0.3125

Epoch 00001: val_accuracy improved from -inf to 0.31250, saving model to models\best_model_esc10_exp_1_9
Epoch 2/20
8/8 [==============================] - 17s 2s/step - loss: 2.0237 - accuracy: 0.2325 - val_loss: 1.5479 - val_accuracy: 0.5000

Epoch 00002: val_accuracy improved from 0.31250 to 0.50000, saving model to models\best_model_esc10_exp_1_9
Epoch 3/20
8/8 [==============================] - 18s 2s/step - loss: 1.8205 - accuracy: 0.3176 - val_loss: 1.4398 - val_accuracy: 0.4844

Epoch 00003: val_accuracy did not improve from 0.50000
Epoch 4/20
8/8 [==============================] - 18s 2s/step - loss: 1.7503 - accuracy: 0.3340 - val_loss: 1.2121 - val_accuracy: 0.6094

Epoch 00004: val_accuracy improved from 0.50000 to 0.60938, saving model to models\best_model_esc10_exp_1_9
Epoch 5/20
8/8 [==============================] - 18s 2s/step - loss: 1.4081 - accuracy: 0.5049 - val_loss: 0.9321 - val_accuracy: 0.5938

Epoch 00005: val_accuracy did not improve from 0.60938
Epoch 6/20
8/8 [==============================] - 17s 2s/step - loss: 1.1228 - accuracy: 0.5790 - val_loss: 0.7806 - val_accuracy: 0.7344

Epoch 00006: val_accuracy improved from 0.60938 to 0.73438, saving model to models\best_model_esc10_exp_1_9
Epoch 7/20
8/8 [==============================] - 17s 2s/step - loss: 1.0305 - accuracy: 0.6394 - val_loss: 0.6837 - val_accuracy: 0.7656

Epoch 00007: val_accuracy improved from 0.73438 to 0.76562, saving model to models\best_model_esc10_exp_1_9
Epoch 8/20
8/8 [==============================] - 16s 2s/step - loss: 0.9022 - accuracy: 0.6748 - val_loss: 0.7386 - val_accuracy: 0.7344

Epoch 00008: val_accuracy did not improve from 0.76562
Epoch 9/20
8/8 [==============================] - 18s 2s/step - loss: 0.8015 - accuracy: 0.6979 - val_loss: 0.6808 - val_accuracy: 0.7500

Epoch 00009: val_accuracy did not improve from 0.76562
Epoch 10/20
8/8 [==============================] - 18s 2s/step - loss: 0.7768 - accuracy: 0.7087 - val_loss: 0.6378 - val_accuracy: 0.7969

Epoch 00010: val_accuracy improved from 0.76562 to 0.79688, saving model to models\best_model_esc10_exp_1_9
Epoch 11/20
8/8 [==============================] - 18s 2s/step - loss: 0.6284 - accuracy: 0.7708 - val_loss: 0.6496 - val_accuracy: 0.7812

Epoch 00011: val_accuracy did not improve from 0.79688
Epoch 12/20
8/8 [==============================] - 18s 2s/step - loss: 0.6068 - accuracy: 0.7787 - val_loss: 0.6485 - val_accuracy: 0.7969

Epoch 00012: val_accuracy did not improve from 0.79688
Epoch 13/20
8/8 [==============================] - 18s 2s/step - loss: 0.4992 - accuracy: 0.8085 - val_loss: 0.5895 - val_accuracy: 0.8125

Epoch 00013: val_accuracy improved from 0.79688 to 0.81250, saving model to models\best_model_esc10_exp_1_9
Epoch 14/20
8/8 [==============================] - 17s 2s/step - loss: 0.4893 - accuracy: 0.8269 - val_loss: 0.6168 - val_accuracy: 0.7969

Epoch 00014: val_accuracy did not improve from 0.81250
Epoch 15/20
8/8 [==============================] - 17s 2s/step - loss: 0.4194 - accuracy: 0.8505 - val_loss: 0.6416 - val_accuracy: 0.7812

Epoch 00015: val_accuracy did not improve from 0.81250
Epoch 16/20
8/8 [==============================] - 16s 2s/step - loss: 0.3612 - accuracy: 0.8745 - val_loss: 0.5762 - val_accuracy: 0.7969

Epoch 00016: val_accuracy did not improve from 0.81250
Epoch 17/20
8/8 [==============================] - 18s 2s/step - loss: 0.3584 - accuracy: 0.8685 - val_loss: 0.5397 - val_accuracy: 0.8125

Epoch 00017: val_accuracy did not improve from 0.81250
Epoch 18/20
8/8 [==============================] - 19s 2s/step - loss: 0.3488 - accuracy: 0.8872 - val_loss: 0.6345 - val_accuracy: 0.8281

Epoch 00018: val_accuracy improved from 0.81250 to 0.82812, saving model to models\best_model_esc10_exp_1_9
Epoch 19/20
8/8 [==============================] - 18s 2s/step - loss: 0.2914 - accuracy: 0.9003 - val_loss: 0.7728 - val_accuracy: 0.8125

Epoch 00019: val_accuracy did not improve from 0.82812
Epoch 20/20
8/8 [==============================] - 18s 2s/step - loss: 0.3151 - accuracy: 0.8914 - val_loss: 0.5981 - val_accuracy: 0.7812

Epoch 00020: val_accuracy did not improve from 0.82812
Test accuracy:  0.824999988079071
Epoch 1/20
8/8 [==============================] - 19s 2s/step - loss: 2.2806 - accuracy: 0.1065 - val_loss: 2.0395 - val_accuracy: 0.1406

Epoch 00001: val_accuracy improved from -inf to 0.14062, saving model to models\best_model_esc10_exp_1_10
Epoch 2/20
8/8 [==============================] - 18s 2s/step - loss: 2.0593 - accuracy: 0.2425 - val_loss: 1.6970 - val_accuracy: 0.3281

Epoch 00002: val_accuracy improved from 0.14062 to 0.32812, saving model to models\best_model_esc10_exp_1_10
Epoch 3/20
8/8 [==============================] - 17s 2s/step - loss: 1.8462 - accuracy: 0.3009 - val_loss: 1.4607 - val_accuracy: 0.5781

Epoch 00003: val_accuracy improved from 0.32812 to 0.57812, saving model to models\best_model_esc10_exp_1_10
Epoch 4/20
8/8 [==============================] - 16s 2s/step - loss: 1.5939 - accuracy: 0.4001 - val_loss: 1.0611 - val_accuracy: 0.6406

Epoch 00004: val_accuracy improved from 0.57812 to 0.64062, saving model to models\best_model_esc10_exp_1_10
Epoch 5/20
8/8 [==============================] - 18s 2s/step - loss: 1.3905 - accuracy: 0.4698 - val_loss: 0.9324 - val_accuracy: 0.7188

Epoch 00005: val_accuracy improved from 0.64062 to 0.71875, saving model to models\best_model_esc10_exp_1_10
Epoch 6/20
8/8 [==============================] - 18s 2s/step - loss: 1.1471 - accuracy: 0.5904 - val_loss: 0.7777 - val_accuracy: 0.7812

Epoch 00006: val_accuracy improved from 0.71875 to 0.78125, saving model to models\best_model_esc10_exp_1_10
Epoch 7/20
8/8 [==============================] - 18s 2s/step - loss: 0.9642 - accuracy: 0.6536 - val_loss: 0.7274 - val_accuracy: 0.7656

Epoch 00007: val_accuracy did not improve from 0.78125
Epoch 8/20
8/8 [==============================] - 18s 2s/step - loss: 1.0168 - accuracy: 0.6282 - val_loss: 0.6578 - val_accuracy: 0.7969

Epoch 00008: val_accuracy improved from 0.78125 to 0.79688, saving model to models\best_model_esc10_exp_1_10
Epoch 9/20
8/8 [==============================] - 18s 2s/step - loss: 0.8983 - accuracy: 0.7008 - val_loss: 0.6333 - val_accuracy: 0.7656

Epoch 00009: val_accuracy did not improve from 0.79688
Epoch 10/20
8/8 [==============================] - 17s 2s/step - loss: 0.8065 - accuracy: 0.7002 - val_loss: 0.5856 - val_accuracy: 0.8281

Epoch 00010: val_accuracy improved from 0.79688 to 0.82812, saving model to models\best_model_esc10_exp_1_10
Epoch 11/20
8/8 [==============================] - 17s 2s/step - loss: 0.7196 - accuracy: 0.7632 - val_loss: 0.5043 - val_accuracy: 0.8125

Epoch 00011: val_accuracy did not improve from 0.82812
Epoch 12/20
8/8 [==============================] - 18s 2s/step - loss: 0.6121 - accuracy: 0.7860 - val_loss: 0.5145 - val_accuracy: 0.8281

Epoch 00012: val_accuracy did not improve from 0.82812
Epoch 13/20
8/8 [==============================] - 18s 2s/step - loss: 0.5720 - accuracy: 0.7981 - val_loss: 0.4945 - val_accuracy: 0.8594

Epoch 00013: val_accuracy improved from 0.82812 to 0.85938, saving model to models\best_model_esc10_exp_1_10
Epoch 14/20
8/8 [==============================] - 18s 2s/step - loss: 0.5235 - accuracy: 0.7888 - val_loss: 0.4812 - val_accuracy: 0.8594

Epoch 00014: val_accuracy did not improve from 0.85938
Epoch 15/20
8/8 [==============================] - 17s 2s/step - loss: 0.4806 - accuracy: 0.8192 - val_loss: 0.5659 - val_accuracy: 0.8438

Epoch 00015: val_accuracy did not improve from 0.85938
Epoch 16/20
8/8 [==============================] - 16s 2s/step - loss: 0.4808 - accuracy: 0.8333 - val_loss: 0.4271 - val_accuracy: 0.8906

Epoch 00016: val_accuracy improved from 0.85938 to 0.89062, saving model to models\best_model_esc10_exp_1_10
Epoch 17/20
8/8 [==============================] - 16s 2s/step - loss: 0.3715 - accuracy: 0.8764 - val_loss: 0.4524 - val_accuracy: 0.8438

Epoch 00017: val_accuracy did not improve from 0.89062
Epoch 18/20
8/8 [==============================] - 19s 2s/step - loss: 0.3676 - accuracy: 0.8763 - val_loss: 0.4646 - val_accuracy: 0.8594

Epoch 00018: val_accuracy did not improve from 0.89062
Epoch 19/20
8/8 [==============================] - 19s 2s/step - loss: 0.4003 - accuracy: 0.8506 - val_loss: 0.5223 - val_accuracy: 0.8594

Epoch 00019: val_accuracy did not improve from 0.89062
Epoch 20/20
8/8 [==============================] - 19s 2s/step - loss: 0.3905 - accuracy: 0.8431 - val_loss: 0.4381 - val_accuracy: 0.8750

Epoch 00020: val_accuracy did not improve from 0.89062
Test accuracy:  0.887499988079071
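The run above trains for a fixed 20 epochs while a `ModelCheckpoint` callback (configured earlier in the notebook) keeps only the weights with the best `val_accuracy`, and each fold ends with a `Test accuracy` line. As a small stdlib-only sketch — a hypothetical helper, not part of the notebook's code — the per-fold test accuracies can be pulled out of such captured log text for later aggregation:

```python
import re

def extract_test_accuracies(log_text: str) -> list[float]:
    """Pull every 'Test accuracy:  <float>' line out of a captured Keras log."""
    return [float(m) for m in re.findall(r"Test accuracy:\s+([0-9.]+)", log_text)]

# Minimal excerpt in the same shape as the logs above
sample = """
Epoch 00020: val_accuracy did not improve from 0.89062
Test accuracy:  0.887499988079071
"""
print(extract_test_accuracies(sample))  # -> [0.887499988079071]
```

This avoids re-running training just to tabulate fold results when only the console output was kept.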
Epoch 1/20
8/8 [==============================] - 20s 2s/step - loss: 2.2775 - accuracy: 0.1171 - val_loss: 1.8796 - val_accuracy: 0.3281

Epoch 00001: val_accuracy improved from -inf to 0.32812, saving model to models\best_model_esc10_exp_1_11
Epoch 2/20
8/8 [==============================] - 19s 2s/step - loss: 1.9876 - accuracy: 0.2553 - val_loss: 1.6332 - val_accuracy: 0.4688

Epoch 00002: val_accuracy improved from 0.32812 to 0.46875, saving model to models\best_model_esc10_exp_1_11
Epoch 3/20
8/8 [==============================] - 18s 2s/step - loss: 1.8147 - accuracy: 0.3136 - val_loss: 1.3869 - val_accuracy: 0.5938

Epoch 00003: val_accuracy improved from 0.46875 to 0.59375, saving model to models\best_model_esc10_exp_1_11
Epoch 4/20
8/8 [==============================] - 17s 2s/step - loss: 1.5419 - accuracy: 0.4297 - val_loss: 1.1586 - val_accuracy: 0.6406

Epoch 00004: val_accuracy improved from 0.59375 to 0.64062, saving model to models\best_model_esc10_exp_1_11
Epoch 5/20
8/8 [==============================] - 17s 2s/step - loss: 1.4548 - accuracy: 0.4634 - val_loss: 0.9868 - val_accuracy: 0.6562

Epoch 00005: val_accuracy improved from 0.64062 to 0.65625, saving model to models\best_model_esc10_exp_1_11
Epoch 6/20
8/8 [==============================] - 19s 2s/step - loss: 1.2133 - accuracy: 0.5665 - val_loss: 1.0078 - val_accuracy: 0.6562

Epoch 00006: val_accuracy did not improve from 0.65625
Epoch 7/20
8/8 [==============================] - 20s 2s/step - loss: 1.0674 - accuracy: 0.6033 - val_loss: 0.7602 - val_accuracy: 0.7031

Epoch 00007: val_accuracy improved from 0.65625 to 0.70312, saving model to models\best_model_esc10_exp_1_11
Epoch 8/20
8/8 [==============================] - 19s 2s/step - loss: 0.9730 - accuracy: 0.5952 - val_loss: 0.6552 - val_accuracy: 0.7500

Epoch 00008: val_accuracy improved from 0.70312 to 0.75000, saving model to models\best_model_esc10_exp_1_11
Epoch 9/20
8/8 [==============================] - 19s 2s/step - loss: 0.8407 - accuracy: 0.6843 - val_loss: 0.6389 - val_accuracy: 0.8125

Epoch 00009: val_accuracy improved from 0.75000 to 0.81250, saving model to models\best_model_esc10_exp_1_11
Epoch 10/20
8/8 [==============================] - 18s 2s/step - loss: 0.8132 - accuracy: 0.6938 - val_loss: 0.6619 - val_accuracy: 0.7344

Epoch 00010: val_accuracy did not improve from 0.81250
Epoch 11/20
8/8 [==============================] - 17s 2s/step - loss: 0.7222 - accuracy: 0.7156 - val_loss: 0.6268 - val_accuracy: 0.7188

Epoch 00011: val_accuracy did not improve from 0.81250
Epoch 12/20
8/8 [==============================] - 17s 2s/step - loss: 0.6720 - accuracy: 0.7767 - val_loss: 0.6675 - val_accuracy: 0.7656

Epoch 00012: val_accuracy did not improve from 0.81250
Epoch 13/20
8/8 [==============================] - 18s 2s/step - loss: 0.6043 - accuracy: 0.7793 - val_loss: 0.5133 - val_accuracy: 0.8281

Epoch 00013: val_accuracy improved from 0.81250 to 0.82812, saving model to models\best_model_esc10_exp_1_11
Epoch 14/20
8/8 [==============================] - 20s 2s/step - loss: 0.5348 - accuracy: 0.8138 - val_loss: 0.4628 - val_accuracy: 0.8438

Epoch 00014: val_accuracy improved from 0.82812 to 0.84375, saving model to models\best_model_esc10_exp_1_11
Epoch 15/20
8/8 [==============================] - 19s 2s/step - loss: 0.5449 - accuracy: 0.7897 - val_loss: 0.4947 - val_accuracy: 0.8281

Epoch 00015: val_accuracy did not improve from 0.84375
Epoch 16/20
8/8 [==============================] - 19s 2s/step - loss: 0.4642 - accuracy: 0.8266 - val_loss: 0.5178 - val_accuracy: 0.8438

Epoch 00016: val_accuracy did not improve from 0.84375
Epoch 17/20
8/8 [==============================] - 20s 3s/step - loss: 0.4287 - accuracy: 0.8682 - val_loss: 0.5931 - val_accuracy: 0.7656

Epoch 00017: val_accuracy did not improve from 0.84375
Epoch 18/20
8/8 [==============================] - 19s 2s/step - loss: 0.3989 - accuracy: 0.8836 - val_loss: 0.5781 - val_accuracy: 0.8281

Epoch 00018: val_accuracy did not improve from 0.84375
Epoch 19/20
8/8 [==============================] - 17s 2s/step - loss: 0.3993 - accuracy: 0.8706 - val_loss: 0.5888 - val_accuracy: 0.8281

Epoch 00019: val_accuracy did not improve from 0.84375
Epoch 20/20
8/8 [==============================] - 19s 2s/step - loss: 0.3355 - accuracy: 0.8703 - val_loss: 0.6210 - val_accuracy: 0.7969

Epoch 00020: val_accuracy did not improve from 0.84375
Test accuracy:  0.8125
Epoch 1/20
8/8 [==============================] - 20s 3s/step - loss: 2.2820 - accuracy: 0.1054 - val_loss: 1.8378 - val_accuracy: 0.2656

Epoch 00001: val_accuracy improved from -inf to 0.26562, saving model to models\best_model_esc10_exp_1_12
Epoch 2/20
8/8 [==============================] - 20s 3s/step - loss: 1.9644 - accuracy: 0.2453 - val_loss: 1.7229 - val_accuracy: 0.3594

Epoch 00002: val_accuracy improved from 0.26562 to 0.35938, saving model to models\best_model_esc10_exp_1_12
Epoch 3/20
8/8 [==============================] - 20s 3s/step - loss: 1.8625 - accuracy: 0.2784 - val_loss: 1.6334 - val_accuracy: 0.3594

Epoch 00003: val_accuracy did not improve from 0.35938
Epoch 4/20
8/8 [==============================] - 19s 2s/step - loss: 1.8329 - accuracy: 0.3345 - val_loss: 1.3353 - val_accuracy: 0.5938

Epoch 00004: val_accuracy improved from 0.35938 to 0.59375, saving model to models\best_model_esc10_exp_1_12
Epoch 5/20
8/8 [==============================] - 18s 2s/step - loss: 1.5016 - accuracy: 0.4347 - val_loss: 1.0463 - val_accuracy: 0.6562

Epoch 00005: val_accuracy improved from 0.59375 to 0.65625, saving model to models\best_model_esc10_exp_1_12
Epoch 6/20
8/8 [==============================] - 18s 2s/step - loss: 1.2839 - accuracy: 0.5408 - val_loss: 1.0678 - val_accuracy: 0.6094

Epoch 00006: val_accuracy did not improve from 0.65625
Epoch 7/20
8/8 [==============================] - 20s 3s/step - loss: 1.2466 - accuracy: 0.5444 - val_loss: 0.8090 - val_accuracy: 0.7031

Epoch 00007: val_accuracy improved from 0.65625 to 0.70312, saving model to models\best_model_esc10_exp_1_12
Epoch 8/20
8/8 [==============================] - 21s 3s/step - loss: 1.0685 - accuracy: 0.6232 - val_loss: 0.7692 - val_accuracy: 0.7969

Epoch 00008: val_accuracy improved from 0.70312 to 0.79688, saving model to models\best_model_esc10_exp_1_12
Epoch 9/20
8/8 [==============================] - 20s 2s/step - loss: 0.9076 - accuracy: 0.6889 - val_loss: 0.6110 - val_accuracy: 0.8281

Epoch 00009: val_accuracy improved from 0.79688 to 0.82812, saving model to models\best_model_esc10_exp_1_12
Epoch 10/20
8/8 [==============================] - 19s 2s/step - loss: 0.7931 - accuracy: 0.7414 - val_loss: 0.5887 - val_accuracy: 0.8281

Epoch 00010: val_accuracy did not improve from 0.82812
Epoch 11/20
8/8 [==============================] - 18s 2s/step - loss: 0.7830 - accuracy: 0.7268 - val_loss: 0.7240 - val_accuracy: 0.7344

Epoch 00011: val_accuracy did not improve from 0.82812
Epoch 12/20
8/8 [==============================] - 18s 2s/step - loss: 0.7589 - accuracy: 0.7464 - val_loss: 0.5829 - val_accuracy: 0.8281

Epoch 00012: val_accuracy did not improve from 0.82812
Epoch 13/20
8/8 [==============================] - 21s 3s/step - loss: 0.5725 - accuracy: 0.7937 - val_loss: 0.5489 - val_accuracy: 0.8594

Epoch 00013: val_accuracy improved from 0.82812 to 0.85938, saving model to models\best_model_esc10_exp_1_12
Epoch 14/20
8/8 [==============================] - 21s 3s/step - loss: 0.5490 - accuracy: 0.7901 - val_loss: 0.5820 - val_accuracy: 0.8438

Epoch 00014: val_accuracy did not improve from 0.85938
Epoch 15/20
8/8 [==============================] - 20s 3s/step - loss: 0.4997 - accuracy: 0.8266 - val_loss: 0.4996 - val_accuracy: 0.7969

Epoch 00015: val_accuracy did not improve from 0.85938
Epoch 16/20
8/8 [==============================] - 19s 2s/step - loss: 0.4956 - accuracy: 0.8462 - val_loss: 0.6022 - val_accuracy: 0.8125

Epoch 00016: val_accuracy did not improve from 0.85938
Epoch 17/20
8/8 [==============================] - 18s 2s/step - loss: 0.4277 - accuracy: 0.8213 - val_loss: 0.4956 - val_accuracy: 0.8438

Epoch 00017: val_accuracy did not improve from 0.85938
Epoch 18/20
8/8 [==============================] - 18s 2s/step - loss: 0.3951 - accuracy: 0.8727 - val_loss: 0.6176 - val_accuracy: 0.8281

Epoch 00018: val_accuracy did not improve from 0.85938
Epoch 19/20
8/8 [==============================] - 20s 3s/step - loss: 0.5300 - accuracy: 0.8072 - val_loss: 0.6391 - val_accuracy: 0.8438

Epoch 00019: val_accuracy did not improve from 0.85938
Epoch 20/20
8/8 [==============================] - 21s 3s/step - loss: 0.3980 - accuracy: 0.8861 - val_loss: 0.5020 - val_accuracy: 0.8594

Epoch 00020: val_accuracy did not improve from 0.85938
Test accuracy:  0.8374999761581421
Epoch 1/20
8/8 [==============================] - 21s 3s/step - loss: 2.3017 - accuracy: 0.1176 - val_loss: 2.1065 - val_accuracy: 0.2188

Epoch 00001: val_accuracy improved from -inf to 0.21875, saving model to models\best_model_esc10_exp_1_13
Epoch 2/20
8/8 [==============================] - 20s 3s/step - loss: 2.1323 - accuracy: 0.1882 - val_loss: 1.6799 - val_accuracy: 0.3750

Epoch 00002: val_accuracy improved from 0.21875 to 0.37500, saving model to models\best_model_esc10_exp_1_13
Epoch 3/20
8/8 [==============================] - 19s 2s/step - loss: 1.8743 - accuracy: 0.2812 - val_loss: 1.5632 - val_accuracy: 0.3594

Epoch 00003: val_accuracy did not improve from 0.37500
Epoch 4/20
8/8 [==============================] - 18s 2s/step - loss: 1.6928 - accuracy: 0.3320 - val_loss: 1.3562 - val_accuracy: 0.5156

Epoch 00004: val_accuracy improved from 0.37500 to 0.51562, saving model to models\best_model_esc10_exp_1_13
Epoch 5/20
8/8 [==============================] - 18s 2s/step - loss: 1.5645 - accuracy: 0.4125 - val_loss: 1.2141 - val_accuracy: 0.5312

Epoch 00005: val_accuracy improved from 0.51562 to 0.53125, saving model to models\best_model_esc10_exp_1_13
Epoch 6/20
8/8 [==============================] - 19s 2s/step - loss: 1.4225 - accuracy: 0.4906 - val_loss: 0.9766 - val_accuracy: 0.7344

Epoch 00006: val_accuracy improved from 0.53125 to 0.73438, saving model to models\best_model_esc10_exp_1_13
Epoch 7/20
8/8 [==============================] - 21s 3s/step - loss: 1.2364 - accuracy: 0.5519 - val_loss: 0.8753 - val_accuracy: 0.6562

Epoch 00007: val_accuracy did not improve from 0.73438
Epoch 8/20
8/8 [==============================] - 22s 3s/step - loss: 1.1554 - accuracy: 0.5605 - val_loss: 0.8534 - val_accuracy: 0.7188

Epoch 00008: val_accuracy did not improve from 0.73438
Epoch 9/20
8/8 [==============================] - 21s 3s/step - loss: 1.0224 - accuracy: 0.6105 - val_loss: 0.8694 - val_accuracy: 0.6875

Epoch 00009: val_accuracy did not improve from 0.73438
Epoch 10/20
8/8 [==============================] - 20s 3s/step - loss: 1.0122 - accuracy: 0.6286 - val_loss: 0.6708 - val_accuracy: 0.7500

Epoch 00010: val_accuracy improved from 0.73438 to 0.75000, saving model to models\best_model_esc10_exp_1_13
Epoch 11/20
8/8 [==============================] - 19s 2s/step - loss: 0.8453 - accuracy: 0.6850 - val_loss: 0.6469 - val_accuracy: 0.7500

Epoch 00011: val_accuracy did not improve from 0.75000
Epoch 12/20
8/8 [==============================] - 18s 2s/step - loss: 0.7392 - accuracy: 0.7340 - val_loss: 0.5331 - val_accuracy: 0.7969

Epoch 00012: val_accuracy improved from 0.75000 to 0.79688, saving model to models\best_model_esc10_exp_1_13
Epoch 13/20
8/8 [==============================] - 19s 2s/step - loss: 0.7017 - accuracy: 0.7395 - val_loss: 0.5093 - val_accuracy: 0.7969

Epoch 00013: val_accuracy did not improve from 0.79688
Epoch 14/20
8/8 [==============================] - 21s 3s/step - loss: 0.6379 - accuracy: 0.7417 - val_loss: 0.5410 - val_accuracy: 0.7969

Epoch 00014: val_accuracy did not improve from 0.79688
Epoch 15/20
8/8 [==============================] - 21s 3s/step - loss: 0.5267 - accuracy: 0.8014 - val_loss: 0.5642 - val_accuracy: 0.7656

Epoch 00015: val_accuracy did not improve from 0.79688
Epoch 16/20
8/8 [==============================] - 20s 3s/step - loss: 0.5113 - accuracy: 0.8010 - val_loss: 0.5167 - val_accuracy: 0.8281

Epoch 00016: val_accuracy improved from 0.79688 to 0.82812, saving model to models\best_model_esc10_exp_1_13
Epoch 17/20
8/8 [==============================] - 19s 2s/step - loss: 0.4695 - accuracy: 0.8103 - val_loss: 0.6608 - val_accuracy: 0.7812

Epoch 00017: val_accuracy did not improve from 0.82812
Epoch 18/20
8/8 [==============================] - 20s 3s/step - loss: 0.4986 - accuracy: 0.8216 - val_loss: 0.5476 - val_accuracy: 0.8125

Epoch 00018: val_accuracy did not improve from 0.82812
Epoch 19/20
8/8 [==============================] - 21s 3s/step - loss: 0.4517 - accuracy: 0.8335 - val_loss: 0.6060 - val_accuracy: 0.7969

Epoch 00019: val_accuracy did not improve from 0.82812
Epoch 20/20
8/8 [==============================] - 20s 3s/step - loss: 0.3683 - accuracy: 0.8905 - val_loss: 0.6207 - val_accuracy: 0.7812

Epoch 00020: val_accuracy did not improve from 0.82812
Test accuracy:  0.8500000238418579
Epoch 1/20
8/8 [==============================] - 13s 2s/step - loss: 2.2877 - accuracy: 0.1525 - val_loss: 1.9364 - val_accuracy: 0.2344

Epoch 00001: val_accuracy improved from -inf to 0.23438, saving model to models\best_model_esc10_exp_1_14
Epoch 2/20
8/8 [==============================] - 12s 1s/step - loss: 1.9731 - accuracy: 0.2518 - val_loss: 1.6292 - val_accuracy: 0.3438

Epoch 00002: val_accuracy improved from 0.23438 to 0.34375, saving model to models\best_model_esc10_exp_1_14
Epoch 3/20
8/8 [==============================] - 11s 1s/step - loss: 1.7734 - accuracy: 0.2903 - val_loss: 1.4056 - val_accuracy: 0.5469

Epoch 00003: val_accuracy improved from 0.34375 to 0.54688, saving model to models\best_model_esc10_exp_1_14
Epoch 4/20
8/8 [==============================] - 12s 2s/step - loss: 1.5432 - accuracy: 0.4172 - val_loss: 1.1091 - val_accuracy: 0.5625

Epoch 00004: val_accuracy improved from 0.54688 to 0.56250, saving model to models\best_model_esc10_exp_1_14
Epoch 5/20
8/8 [==============================] - 14s 2s/step - loss: 1.3238 - accuracy: 0.5207 - val_loss: 0.9393 - val_accuracy: 0.6719

Epoch 00005: val_accuracy improved from 0.56250 to 0.67188, saving model to models\best_model_esc10_exp_1_14
Epoch 6/20
8/8 [==============================] - 14s 2s/step - loss: 1.1368 - accuracy: 0.5774 - val_loss: 0.8638 - val_accuracy: 0.6875

Epoch 00006: val_accuracy improved from 0.67188 to 0.68750, saving model to models\best_model_esc10_exp_1_14
Epoch 7/20
8/8 [==============================] - 13s 2s/step - loss: 1.1124 - accuracy: 0.5918 - val_loss: 0.7797 - val_accuracy: 0.6875

Epoch 00007: val_accuracy did not improve from 0.68750
Epoch 8/20
8/8 [==============================] - 13s 2s/step - loss: 0.9737 - accuracy: 0.6474 - val_loss: 0.6341 - val_accuracy: 0.7812

Epoch 00008: val_accuracy improved from 0.68750 to 0.78125, saving model to models\best_model_esc10_exp_1_14
Epoch 9/20
8/8 [==============================] - 13s 2s/step - loss: 0.8480 - accuracy: 0.6797 - val_loss: 0.6970 - val_accuracy: 0.7969

Epoch 00009: val_accuracy improved from 0.78125 to 0.79688, saving model to models\best_model_esc10_exp_1_14
Epoch 10/20
8/8 [==============================] - 13s 2s/step - loss: 0.7667 - accuracy: 0.7168 - val_loss: 0.5522 - val_accuracy: 0.8281

Epoch 00010: val_accuracy improved from 0.79688 to 0.82812, saving model to models\best_model_esc10_exp_1_14
Epoch 11/20
8/8 [==============================] - 13s 2s/step - loss: 0.7256 - accuracy: 0.7420 - val_loss: 0.6853 - val_accuracy: 0.7031

Epoch 00011: val_accuracy did not improve from 0.82812
Epoch 12/20
8/8 [==============================] - 13s 2s/step - loss: 0.7571 - accuracy: 0.7058 - val_loss: 0.6304 - val_accuracy: 0.8125

Epoch 00012: val_accuracy did not improve from 0.82812
Epoch 13/20
8/8 [==============================] - 12s 2s/step - loss: 0.5856 - accuracy: 0.7900 - val_loss: 0.5619 - val_accuracy: 0.8281

Epoch 00013: val_accuracy did not improve from 0.82812
Epoch 14/20
8/8 [==============================] - 12s 1s/step - loss: 0.5425 - accuracy: 0.8197 - val_loss: 0.7042 - val_accuracy: 0.7812

Epoch 00014: val_accuracy did not improve from 0.82812
Epoch 15/20
8/8 [==============================] - 13s 2s/step - loss: 0.5663 - accuracy: 0.7800 - val_loss: 0.5797 - val_accuracy: 0.7656

Epoch 00015: val_accuracy did not improve from 0.82812
Epoch 16/20
8/8 [==============================] - 14s 2s/step - loss: 0.5006 - accuracy: 0.8153 - val_loss: 0.6673 - val_accuracy: 0.8125

Epoch 00016: val_accuracy did not improve from 0.82812
Epoch 17/20
8/8 [==============================] - 13s 2s/step - loss: 0.4106 - accuracy: 0.8496 - val_loss: 0.6039 - val_accuracy: 0.8125

Epoch 00017: val_accuracy did not improve from 0.82812
Epoch 18/20
8/8 [==============================] - 13s 2s/step - loss: 0.3959 - accuracy: 0.8638 - val_loss: 0.6738 - val_accuracy: 0.7969

Epoch 00018: val_accuracy did not improve from 0.82812
Epoch 19/20
8/8 [==============================] - 13s 2s/step - loss: 0.3953 - accuracy: 0.8455 - val_loss: 0.6547 - val_accuracy: 0.8125

Epoch 00019: val_accuracy did not improve from 0.82812
Epoch 20/20
8/8 [==============================] - 13s 2s/step - loss: 0.3495 - accuracy: 0.8696 - val_loss: 0.6561 - val_accuracy: 0.8594

Epoch 00020: val_accuracy improved from 0.82812 to 0.85938, saving model to models\best_model_esc10_exp_1_14
Test accuracy:  0.875
Epoch 1/20
8/8 [==============================] - 13s 2s/step - loss: 2.2660 - accuracy: 0.1287 - val_loss: 1.7659 - val_accuracy: 0.3125

Epoch 00001: val_accuracy improved from -inf to 0.31250, saving model to models\best_model_esc10_exp_1_15
Epoch 2/20
8/8 [==============================] - 12s 2s/step - loss: 1.9564 - accuracy: 0.2671 - val_loss: 1.5443 - val_accuracy: 0.3281

Epoch 00002: val_accuracy improved from 0.31250 to 0.32812, saving model to models\best_model_esc10_exp_1_15
Epoch 3/20
8/8 [==============================] - 12s 1s/step - loss: 1.7870 - accuracy: 0.3515 - val_loss: 1.1895 - val_accuracy: 0.6719

Epoch 00003: val_accuracy improved from 0.32812 to 0.67188, saving model to models\best_model_esc10_exp_1_15
Epoch 4/20
8/8 [==============================] - 12s 1s/step - loss: 1.4852 - accuracy: 0.4692 - val_loss: 0.9861 - val_accuracy: 0.6250

Epoch 00004: val_accuracy did not improve from 0.67188
Epoch 5/20
8/8 [==============================] - 12s 2s/step - loss: 1.2075 - accuracy: 0.5504 - val_loss: 1.1107 - val_accuracy: 0.6250

Epoch 00005: val_accuracy did not improve from 0.67188
Epoch 6/20
8/8 [==============================] - 13s 2s/step - loss: 1.1306 - accuracy: 0.5926 - val_loss: 0.8325 - val_accuracy: 0.7656

Epoch 00006: val_accuracy improved from 0.67188 to 0.76562, saving model to models\best_model_esc10_exp_1_15
Epoch 7/20
8/8 [==============================] - 13s 2s/step - loss: 0.9174 - accuracy: 0.6475 - val_loss: 0.5855 - val_accuracy: 0.8281

Epoch 00007: val_accuracy improved from 0.76562 to 0.82812, saving model to models\best_model_esc10_exp_1_15
Epoch 8/20
8/8 [==============================] - 13s 2s/step - loss: 0.8923 - accuracy: 0.6542 - val_loss: 0.5727 - val_accuracy: 0.8281

Epoch 00008: val_accuracy did not improve from 0.82812
Epoch 9/20
8/8 [==============================] - 13s 2s/step - loss: 0.7009 - accuracy: 0.7473 - val_loss: 0.5805 - val_accuracy: 0.8125

Epoch 00009: val_accuracy did not improve from 0.82812
Epoch 10/20
8/8 [==============================] - 13s 2s/step - loss: 0.7302 - accuracy: 0.7347 - val_loss: 0.5649 - val_accuracy: 0.8281

Epoch 00010: val_accuracy did not improve from 0.82812
Epoch 11/20
8/8 [==============================] - 13s 2s/step - loss: 0.5605 - accuracy: 0.8052 - val_loss: 0.4698 - val_accuracy: 0.8281

Epoch 00011: val_accuracy did not improve from 0.82812
Epoch 12/20
8/8 [==============================] - 13s 2s/step - loss: 0.5658 - accuracy: 0.7888 - val_loss: 0.4844 - val_accuracy: 0.7969

Epoch 00012: val_accuracy did not improve from 0.82812
Epoch 13/20
8/8 [==============================] - 12s 2s/step - loss: 0.5354 - accuracy: 0.8111 - val_loss: 0.4856 - val_accuracy: 0.8438

Epoch 00013: val_accuracy improved from 0.82812 to 0.84375, saving model to models\best_model_esc10_exp_1_15
Epoch 14/20
8/8 [==============================] - 12s 2s/step - loss: 0.5270 - accuracy: 0.8078 - val_loss: 0.6290 - val_accuracy: 0.7656

Epoch 00014: val_accuracy did not improve from 0.84375
Epoch 15/20
8/8 [==============================] - 12s 1s/step - loss: 0.5175 - accuracy: 0.8148 - val_loss: 0.6624 - val_accuracy: 0.7812

Epoch 00015: val_accuracy did not improve from 0.84375
Epoch 16/20
8/8 [==============================] - 12s 2s/step - loss: 0.4071 - accuracy: 0.8313 - val_loss: 0.5478 - val_accuracy: 0.8125

Epoch 00016: val_accuracy did not improve from 0.84375
Epoch 17/20
8/8 [==============================] - 13s 2s/step - loss: 0.2858 - accuracy: 0.9073 - val_loss: 0.4346 - val_accuracy: 0.8281

Epoch 00017: val_accuracy did not improve from 0.84375
Epoch 18/20
8/8 [==============================] - 13s 2s/step - loss: 0.2999 - accuracy: 0.8918 - val_loss: 0.5139 - val_accuracy: 0.7969

Epoch 00018: val_accuracy did not improve from 0.84375
Epoch 19/20
8/8 [==============================] - 13s 2s/step - loss: 0.3100 - accuracy: 0.8879 - val_loss: 0.3811 - val_accuracy: 0.8750

Epoch 00019: val_accuracy improved from 0.84375 to 0.87500, saving model to models\best_model_esc10_exp_1_15
Epoch 20/20
8/8 [==============================] - 13s 2s/step - loss: 0.2972 - accuracy: 0.9065 - val_loss: 0.4429 - val_accuracy: 0.8594

Epoch 00020: val_accuracy did not improve from 0.87500
Test accuracy:  0.875
Epoch 1/20
8/8 [==============================] - 14s 2s/step - loss: 2.2960 - accuracy: 0.1411 - val_loss: 1.9676 - val_accuracy: 0.2500

Epoch 00001: val_accuracy improved from -inf to 0.25000, saving model to models\best_model_esc10_exp_1_16
Epoch 2/20
8/8 [==============================] - 13s 2s/step - loss: 2.0573 - accuracy: 0.2168 - val_loss: 1.7313 - val_accuracy: 0.3594

Epoch 00002: val_accuracy improved from 0.25000 to 0.35938, saving model to models\best_model_esc10_exp_1_16
Epoch 3/20
8/8 [==============================] - 13s 2s/step - loss: 1.8523 - accuracy: 0.2970 - val_loss: 1.5336 - val_accuracy: 0.5312

Epoch 00003: val_accuracy improved from 0.35938 to 0.53125, saving model to models\best_model_esc10_exp_1_16
Epoch 4/20
8/8 [==============================] - 12s 2s/step - loss: 1.6654 - accuracy: 0.3927 - val_loss: 1.3849 - val_accuracy: 0.4844

Epoch 00004: val_accuracy did not improve from 0.53125
Epoch 5/20
8/8 [==============================] - 12s 2s/step - loss: 1.6188 - accuracy: 0.4093 - val_loss: 1.2107 - val_accuracy: 0.5938

Epoch 00005: val_accuracy improved from 0.53125 to 0.59375, saving model to models\best_model_esc10_exp_1_16
Epoch 6/20
8/8 [==============================] - 13s 2s/step - loss: 1.4225 - accuracy: 0.4636 - val_loss: 0.9403 - val_accuracy: 0.7344

Epoch 00006: val_accuracy improved from 0.59375 to 0.73438, saving model to models\best_model_esc10_exp_1_16
Epoch 7/20
8/8 [==============================] - 13s 2s/step - loss: 1.2415 - accuracy: 0.5601 - val_loss: 0.8783 - val_accuracy: 0.7031

Epoch 00007: val_accuracy did not improve from 0.73438
Epoch 8/20
8/8 [==============================] - 13s 2s/step - loss: 1.0807 - accuracy: 0.6067 - val_loss: 0.8798 - val_accuracy: 0.7188

Epoch 00008: val_accuracy did not improve from 0.73438
Epoch 9/20
8/8 [==============================] - 13s 2s/step - loss: 0.9628 - accuracy: 0.6270 - val_loss: 0.8764 - val_accuracy: 0.6875

Epoch 00009: val_accuracy did not improve from 0.73438
Epoch 10/20
8/8 [==============================] - 13s 2s/step - loss: 0.8603 - accuracy: 0.6740 - val_loss: 0.6565 - val_accuracy: 0.7812

Epoch 00010: val_accuracy improved from 0.73438 to 0.78125, saving model to models\best_model_esc10_exp_1_16
Epoch 11/20
8/8 [==============================] - 13s 2s/step - loss: 0.6752 - accuracy: 0.7444 - val_loss: 0.7086 - val_accuracy: 0.7188

Epoch 00011: val_accuracy did not improve from 0.78125
Epoch 12/20
8/8 [==============================] - 13s 2s/step - loss: 0.6989 - accuracy: 0.7467 - val_loss: 0.7727 - val_accuracy: 0.7812

Epoch 00012: val_accuracy did not improve from 0.78125
Epoch 13/20
8/8 [==============================] - 13s 2s/step - loss: 0.6264 - accuracy: 0.7841 - val_loss: 0.6857 - val_accuracy: 0.7969

Epoch 00013: val_accuracy improved from 0.78125 to 0.79688, saving model to models\best_model_esc10_exp_1_16
Epoch 14/20
8/8 [==============================] - 13s 2s/step - loss: 0.5413 - accuracy: 0.8154 - val_loss: 0.7657 - val_accuracy: 0.7344

Epoch 00014: val_accuracy did not improve from 0.79688
Epoch 15/20
8/8 [==============================] - 12s 2s/step - loss: 0.5604 - accuracy: 0.7916 - val_loss: 0.7779 - val_accuracy: 0.8125

Epoch 00015: val_accuracy improved from 0.79688 to 0.81250, saving model to models\best_model_esc10_exp_1_16
Epoch 16/20
8/8 [==============================] - 13s 2s/step - loss: 0.5523 - accuracy: 0.8181 - val_loss: 0.8713 - val_accuracy: 0.7344

Epoch 00016: val_accuracy did not improve from 0.81250
Epoch 17/20
8/8 [==============================] - 13s 2s/step - loss: 0.5562 - accuracy: 0.8196 - val_loss: 0.8248 - val_accuracy: 0.7656

Epoch 00017: val_accuracy did not improve from 0.81250
Epoch 18/20
8/8 [==============================] - 13s 2s/step - loss: 0.4457 - accuracy: 0.8468 - val_loss: 0.7778 - val_accuracy: 0.7656

Epoch 00018: val_accuracy did not improve from 0.81250
Epoch 19/20
8/8 [==============================] - 13s 2s/step - loss: 0.4079 - accuracy: 0.8355 - val_loss: 0.7019 - val_accuracy: 0.8125

Epoch 00019: val_accuracy did not improve from 0.81250
Epoch 20/20
8/8 [==============================] - 13s 2s/step - loss: 0.3069 - accuracy: 0.8929 - val_loss: 0.8154 - val_accuracy: 0.8125

Epoch 00020: val_accuracy did not improve from 0.81250
Test accuracy:  0.7749999761581421
Epoch 1/20
8/8 [==============================] - 14s 2s/step - loss: 2.2866 - accuracy: 0.1447 - val_loss: 2.0073 - val_accuracy: 0.1562

Epoch 00001: val_accuracy improved from -inf to 0.15625, saving model to models\best_model_esc10_exp_1_17
Epoch 2/20
8/8 [==============================] - 11s 1s/step - loss: 2.0492 - accuracy: 0.2384 - val_loss: 1.7162 - val_accuracy: 0.4062

Epoch 00002: val_accuracy improved from 0.15625 to 0.40625, saving model to models\best_model_esc10_exp_1_17
Epoch 3/20
8/8 [==============================] - 12s 1s/step - loss: 1.8388 - accuracy: 0.3142 - val_loss: 1.6205 - val_accuracy: 0.4062

Epoch 00003: val_accuracy did not improve from 0.40625
Epoch 4/20
8/8 [==============================] - 11s 1s/step - loss: 1.7501 - accuracy: 0.3829 - val_loss: 1.4440 - val_accuracy: 0.4688

Epoch 00004: val_accuracy improved from 0.40625 to 0.46875, saving model to models\best_model_esc10_exp_1_17
Epoch 5/20
8/8 [==============================] - 10s 1s/step - loss: 1.5517 - accuracy: 0.4160 - val_loss: 1.1912 - val_accuracy: 0.5938

Epoch 00005: val_accuracy improved from 0.46875 to 0.59375, saving model to models\best_model_esc10_exp_1_17
Epoch 6/20
8/8 [==============================] - 10s 1s/step - loss: 1.3787 - accuracy: 0.4720 - val_loss: 0.9345 - val_accuracy: 0.6875

Epoch 00006: val_accuracy improved from 0.59375 to 0.68750, saving model to models\best_model_esc10_exp_1_17
Epoch 7/20
8/8 [==============================] - 12s 1s/step - loss: 1.1383 - accuracy: 0.5417 - val_loss: 0.8140 - val_accuracy: 0.7188

Epoch 00007: val_accuracy improved from 0.68750 to 0.71875, saving model to models\best_model_esc10_exp_1_17
Epoch 8/20
8/8 [==============================] - 11s 1s/step - loss: 1.0402 - accuracy: 0.6235 - val_loss: 0.7696 - val_accuracy: 0.7500

Epoch 00008: val_accuracy improved from 0.71875 to 0.75000, saving model to models\best_model_esc10_exp_1_17
Epoch 9/20
8/8 [==============================] - 12s 1s/step - loss: 1.0433 - accuracy: 0.6138 - val_loss: 0.7638 - val_accuracy: 0.7656

Epoch 00009: val_accuracy improved from 0.75000 to 0.76562, saving model to models\best_model_esc10_exp_1_17
Epoch 10/20
8/8 [==============================] - 12s 1s/step - loss: 0.8755 - accuracy: 0.6724 - val_loss: 0.7432 - val_accuracy: 0.7344

Epoch 00010: val_accuracy did not improve from 0.76562
Epoch 11/20
8/8 [==============================] - 11s 1s/step - loss: 0.8191 - accuracy: 0.7061 - val_loss: 0.5911 - val_accuracy: 0.7812

Epoch 00011: val_accuracy improved from 0.76562 to 0.78125, saving model to models\best_model_esc10_exp_1_17
Epoch 12/20
8/8 [==============================] - 12s 1s/step - loss: 0.6957 - accuracy: 0.7427 - val_loss: 0.6703 - val_accuracy: 0.7812

Epoch 00012: val_accuracy did not improve from 0.78125
Epoch 13/20
8/8 [==============================] - 12s 1s/step - loss: 0.6231 - accuracy: 0.8183 - val_loss: 0.6195 - val_accuracy: 0.7812

Epoch 00013: val_accuracy did not improve from 0.78125
Epoch 14/20
8/8 [==============================] - 11s 1s/step - loss: 0.5854 - accuracy: 0.7767 - val_loss: 0.5786 - val_accuracy: 0.7969

Epoch 00014: val_accuracy improved from 0.78125 to 0.79688, saving model to models\best_model_esc10_exp_1_17
Epoch 15/20
8/8 [==============================] - 11s 1s/step - loss: 0.5506 - accuracy: 0.8170 - val_loss: 0.6675 - val_accuracy: 0.7500

Epoch 00015: val_accuracy did not improve from 0.79688
Epoch 16/20
8/8 [==============================] - 12s 1s/step - loss: 0.4591 - accuracy: 0.8437 - val_loss: 0.5981 - val_accuracy: 0.7812

Epoch 00016: val_accuracy did not improve from 0.79688
Epoch 17/20
8/8 [==============================] - 11s 1s/step - loss: 0.4999 - accuracy: 0.8322 - val_loss: 0.5770 - val_accuracy: 0.8125

Epoch 00017: val_accuracy improved from 0.79688 to 0.81250, saving model to models\best_model_esc10_exp_1_17
Epoch 18/20
8/8 [==============================] - 11s 1s/step - loss: 0.4269 - accuracy: 0.8624 - val_loss: 0.6115 - val_accuracy: 0.7969

Epoch 00018: val_accuracy did not improve from 0.81250
Epoch 19/20
8/8 [==============================] - 11s 1s/step - loss: 0.3957 - accuracy: 0.8676 - val_loss: 0.7559 - val_accuracy: 0.7812

Epoch 00019: val_accuracy did not improve from 0.81250
Epoch 20/20
8/8 [==============================] - 10s 1s/step - loss: 0.3813 - accuracy: 0.8707 - val_loss: 0.5452 - val_accuracy: 0.8281

Epoch 00020: val_accuracy improved from 0.81250 to 0.82812, saving model to models\best_model_esc10_exp_1_17
Test accuracy:  0.800000011920929
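Across the eight folds whose runs are complete at this point (exp_1_10 through exp_1_17), the reported test accuracies can be summarized with the standard library. The list below is copied from the `Test accuracy` lines above, rounded to four decimals; this aggregation is an illustrative sketch, not a step the notebook itself performs here:

```python
import statistics

# Test accuracies reported above for folds exp_1_10 .. exp_1_17 (rounded)
fold_acc = [0.8875, 0.8125, 0.8375, 0.8500, 0.8750, 0.8750, 0.7750, 0.8000]

mean_acc = statistics.mean(fold_acc)
stdev_acc = statistics.stdev(fold_acc)  # sample standard deviation
print(f"mean={mean_acc:.4f}  stdev={stdev_acc:.4f}")
```

Reporting mean and spread over folds gives a fairer picture of the model than any single fold's accuracy.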
Epoch 1/20
8/8 [==============================] - 11s 1s/step - loss: 2.3480 - accuracy: 0.0731 - val_loss: 2.1025 - val_accuracy: 0.1875

Epoch 00001: val_accuracy improved from -inf to 0.18750, saving model to models\best_model_esc10_exp_1_18
Epoch 2/20
8/8 [==============================] - 11s 1s/step - loss: 2.1248 - accuracy: 0.1679 - val_loss: 1.9367 - val_accuracy: 0.2969

Epoch 00002: val_accuracy improved from 0.18750 to 0.29688, saving model to models\best_model_esc10_exp_1_18
Epoch 3/20
8/8 [==============================] - 12s 1s/step - loss: 1.9334 - accuracy: 0.2672 - val_loss: 1.5550 - val_accuracy: 0.3906

Epoch 00003: val_accuracy improved from 0.29688 to 0.39062, saving model to models\best_model_esc10_exp_1_18
Epoch 4/20
8/8 [==============================] - 12s 1s/step - loss: 1.7968 - accuracy: 0.3248 - val_loss: 1.3633 - val_accuracy: 0.6250

Epoch 00004: val_accuracy improved from 0.39062 to 0.62500, saving model to models\best_model_esc10_exp_1_18
Epoch 5/20
8/8 [==============================] - 12s 1s/step - loss: 1.5519 - accuracy: 0.3934 - val_loss: 1.1807 - val_accuracy: 0.5938

Epoch 00005: val_accuracy did not improve from 0.62500
Epoch 6/20
8/8 [==============================] - 12s 1s/step - loss: 1.3415 - accuracy: 0.5166 - val_loss: 1.0283 - val_accuracy: 0.5938

Epoch 00006: val_accuracy did not improve from 0.62500
Epoch 7/20
8/8 [==============================] - 12s 1s/step - loss: 1.3425 - accuracy: 0.5217 - val_loss: 0.8722 - val_accuracy: 0.7031

Epoch 00007: val_accuracy improved from 0.62500 to 0.70312, saving model to models\best_model_esc10_exp_1_18
Epoch 8/20
8/8 [==============================] - 12s 1s/step - loss: 1.0735 - accuracy: 0.6146 - val_loss: 0.7621 - val_accuracy: 0.7656

Epoch 00008: val_accuracy improved from 0.70312 to 0.76562, saving model to models\best_model_esc10_exp_1_18
Epoch 9/20
8/8 [==============================] - 11s 1s/step - loss: 1.0037 - accuracy: 0.6392 - val_loss: 0.8264 - val_accuracy: 0.6719

Epoch 00009: val_accuracy did not improve from 0.76562
Epoch 10/20
8/8 [==============================] - 12s 1s/step - loss: 0.8859 - accuracy: 0.6629 - val_loss: 0.6680 - val_accuracy: 0.7188

Epoch 00010: val_accuracy did not improve from 0.76562
Epoch 11/20
8/8 [==============================] - 12s 1s/step - loss: 0.7235 - accuracy: 0.7309 - val_loss: 0.5475 - val_accuracy: 0.7812

Epoch 00011: val_accuracy improved from 0.76562 to 0.78125, saving model to models\best_model_esc10_exp_1_18
Epoch 12/20
8/8 [==============================] - 11s 1s/step - loss: 0.7626 - accuracy: 0.7189 - val_loss: 0.6370 - val_accuracy: 0.7812

Epoch 00012: val_accuracy did not improve from 0.78125
Epoch 13/20
8/8 [==============================] - 12s 1s/step - loss: 0.6317 - accuracy: 0.7842 - val_loss: 0.4867 - val_accuracy: 0.8125

Epoch 00013: val_accuracy improved from 0.78125 to 0.81250, saving model to models\best_model_esc10_exp_1_18
Epoch 14/20
8/8 [==============================] - 12s 1s/step - loss: 0.5834 - accuracy: 0.7796 - val_loss: 0.5403 - val_accuracy: 0.8125

Epoch 00014: val_accuracy did not improve from 0.81250
Epoch 15/20
8/8 [==============================] - 12s 1s/step - loss: 0.6387 - accuracy: 0.7795 - val_loss: 0.4989 - val_accuracy: 0.8281

Epoch 00015: val_accuracy improved from 0.81250 to 0.82812, saving model to models\best_model_esc10_exp_1_18
Epoch 16/20
8/8 [==============================] - 11s 1s/step - loss: 0.5480 - accuracy: 0.8001 - val_loss: 0.5200 - val_accuracy: 0.8594

Epoch 00016: val_accuracy improved from 0.82812 to 0.85938, saving model to models\best_model_esc10_exp_1_18
Epoch 17/20
8/8 [==============================] - 11s 1s/step - loss: 0.5518 - accuracy: 0.7778 - val_loss: 0.4774 - val_accuracy: 0.8438

Epoch 00017: val_accuracy did not improve from 0.85938
Epoch 18/20
8/8 [==============================] - 11s 1s/step - loss: 0.4833 - accuracy: 0.8376 - val_loss: 0.5828 - val_accuracy: 0.7656

Epoch 00018: val_accuracy did not improve from 0.85938
Epoch 19/20
8/8 [==============================] - 10s 1s/step - loss: 0.3050 - accuracy: 0.9026 - val_loss: 0.4935 - val_accuracy: 0.8750

Epoch 00019: val_accuracy improved from 0.85938 to 0.87500, saving model to models\best_model_esc10_exp_1_18
Epoch 20/20
8/8 [==============================] - 11s 1s/step - loss: 0.4068 - accuracy: 0.8604 - val_loss: 0.5270 - val_accuracy: 0.8594

Epoch 00020: val_accuracy did not improve from 0.87500
Test accuracy:  0.887499988079071
Epoch 1/20
8/8 [==============================] - 13s 2s/step - loss: 2.2328 - accuracy: 0.1779 - val_loss: 1.8756 - val_accuracy: 0.2656

Epoch 00001: val_accuracy improved from -inf to 0.26562, saving model to models\best_model_esc10_exp_1_19
Epoch 2/20
8/8 [==============================] - 11s 1s/step - loss: 2.0443 - accuracy: 0.2530 - val_loss: 1.7951 - val_accuracy: 0.2969

Epoch 00002: val_accuracy improved from 0.26562 to 0.29688, saving model to models\best_model_esc10_exp_1_19
Epoch 3/20
8/8 [==============================] - 12s 2s/step - loss: 1.8246 - accuracy: 0.3469 - val_loss: 1.4242 - val_accuracy: 0.5781

Epoch 00003: val_accuracy improved from 0.29688 to 0.57812, saving model to models\best_model_esc10_exp_1_19
Epoch 4/20
8/8 [==============================] - 12s 1s/step - loss: 1.6225 - accuracy: 0.3866 - val_loss: 1.2346 - val_accuracy: 0.5625

Epoch 00004: val_accuracy did not improve from 0.57812
Epoch 5/20
8/8 [==============================] - 12s 1s/step - loss: 1.4228 - accuracy: 0.4780 - val_loss: 1.1132 - val_accuracy: 0.6094

Epoch 00005: val_accuracy improved from 0.57812 to 0.60938, saving model to models\best_model_esc10_exp_1_19
Epoch 6/20
8/8 [==============================] - 12s 2s/step - loss: 1.2698 - accuracy: 0.5291 - val_loss: 0.9403 - val_accuracy: 0.7188

Epoch 00006: val_accuracy improved from 0.60938 to 0.71875, saving model to models\best_model_esc10_exp_1_19
Epoch 7/20
8/8 [==============================] - 12s 1s/step - loss: 1.1429 - accuracy: 0.5603 - val_loss: 0.7874 - val_accuracy: 0.7656

Epoch 00007: val_accuracy improved from 0.71875 to 0.76562, saving model to models\best_model_esc10_exp_1_19
Epoch 8/20
8/8 [==============================] - 12s 1s/step - loss: 1.0304 - accuracy: 0.5988 - val_loss: 0.7574 - val_accuracy: 0.7344

Epoch 00008: val_accuracy did not improve from 0.76562
Epoch 9/20
8/8 [==============================] - 12s 1s/step - loss: 0.8913 - accuracy: 0.6715 - val_loss: 0.6635 - val_accuracy: 0.7812

Epoch 00009: val_accuracy improved from 0.76562 to 0.78125, saving model to models\best_model_esc10_exp_1_19
Epoch 10/20
8/8 [==============================] - 11s 1s/step - loss: 0.7545 - accuracy: 0.7155 - val_loss: 0.6637 - val_accuracy: 0.7969

Epoch 00010: val_accuracy improved from 0.78125 to 0.79688, saving model to models\best_model_esc10_exp_1_19
Epoch 11/20
8/8 [==============================] - 11s 1s/step - loss: 0.6283 - accuracy: 0.7756 - val_loss: 0.7924 - val_accuracy: 0.7500

Epoch 00011: val_accuracy did not improve from 0.79688
Epoch 12/20
8/8 [==============================] - 11s 1s/step - loss: 0.6375 - accuracy: 0.7783 - val_loss: 0.5814 - val_accuracy: 0.7812

Epoch 00012: val_accuracy did not improve from 0.79688
Epoch 13/20
8/8 [==============================] - 12s 1s/step - loss: 0.6186 - accuracy: 0.7661 - val_loss: 0.5740 - val_accuracy: 0.7812

Epoch 00013: val_accuracy did not improve from 0.79688
Epoch 14/20
8/8 [==============================] - 12s 2s/step - loss: 0.5391 - accuracy: 0.8247 - val_loss: 0.7583 - val_accuracy: 0.8125

Epoch 00014: val_accuracy improved from 0.79688 to 0.81250, saving model to models\best_model_esc10_exp_1_19
Epoch 15/20
8/8 [==============================] - 12s 1s/step - loss: 0.4276 - accuracy: 0.8495 - val_loss: 0.5338 - val_accuracy: 0.8125

Epoch 00015: val_accuracy did not improve from 0.81250
Epoch 16/20
8/8 [==============================] - 12s 2s/step - loss: 0.4277 - accuracy: 0.8629 - val_loss: 0.7337 - val_accuracy: 0.7969

Epoch 00016: val_accuracy did not improve from 0.81250
Epoch 17/20
8/8 [==============================] - 11s 1s/step - loss: 0.4017 - accuracy: 0.8508 - val_loss: 0.7444 - val_accuracy: 0.7969

Epoch 00017: val_accuracy did not improve from 0.81250
Epoch 18/20
8/8 [==============================] - 11s 1s/step - loss: 0.3488 - accuracy: 0.8815 - val_loss: 0.6761 - val_accuracy: 0.7812

Epoch 00018: val_accuracy did not improve from 0.81250
Epoch 19/20
8/8 [==============================] - 11s 1s/step - loss: 0.3650 - accuracy: 0.8551 - val_loss: 0.6495 - val_accuracy: 0.7969

Epoch 00019: val_accuracy did not improve from 0.81250
Epoch 20/20
8/8 [==============================] - 12s 1s/step - loss: 0.3299 - accuracy: 0.8756 - val_loss: 0.5330 - val_accuracy: 0.8125

Epoch 00020: val_accuracy did not improve from 0.81250
Test accuracy:  0.800000011920929
{'n_augmentation_per_train': 5, 'p_per_augmentation': 0.5}
100%|██████████| 256/256 [05:37<00:00,  1.32s/it]
Shape after augmentation:  (1536, 128, 431, 1) (1536, 10) (64, 128, 431, 1) (80, 128, 431, 1)
{'n_filters_l1': 64, 'n_filters_l2': 32, 'n_filters_l3': 32, 'n_dense_layer': 150, 'batch_size': 64, 'epochs': 20}
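A sketch of a CNN consistent with the hyperparameter dict above — three convolutional blocks with 64/32/32 filters and a 150-unit dense layer on `(128, 431, 1)` mel-spectrogram inputs. The kernel sizes, dropout rate, and layer ordering are assumptions for illustration; the notebook's actual architecture may differ:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (
    Conv2D, MaxPool2D, BatchNormalization, Flatten, Dense, Dropout
)

def build_model(input_shape=(128, 431, 1), n_classes=10,
                n_filters_l1=64, n_filters_l2=32, n_filters_l3=32,
                n_dense_layer=150):
    # Three conv blocks (64 -> 32 -> 32 filters), then a 150-unit
    # dense layer, matching the hyperparameter dict in the output above.
    model = Sequential([
        Conv2D(n_filters_l1, (3, 3), activation="relu",
               input_shape=input_shape),
        MaxPool2D((2, 2)),
        BatchNormalization(),
        Conv2D(n_filters_l2, (3, 3), activation="relu"),
        MaxPool2D((2, 2)),
        BatchNormalization(),
        Conv2D(n_filters_l3, (3, 3), activation="relu"),
        MaxPool2D((2, 2)),
        Flatten(),
        Dense(n_dense_layer, activation="relu"),
        Dropout(0.5),
        Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

With `batch_size=64` and 1536 augmented training examples, `model.fit` runs 1536 / 64 = 24 steps per epoch, matching the `24/24` progress bars below.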
Epoch 1/20
24/24 [==============================] - 36s 2s/step - loss: 2.2164 - accuracy: 0.1761 - val_loss: 1.5723 - val_accuracy: 0.3281

Epoch 00001: val_accuracy improved from -inf to 0.32812, saving model to models\best_model_esc10_exp_2_0
Epoch 2/20
24/24 [==============================] - 44s 2s/step - loss: 1.6828 - accuracy: 0.3618 - val_loss: 1.0731 - val_accuracy: 0.6250

Epoch 00002: val_accuracy improved from 0.32812 to 0.62500, saving model to models\best_model_esc10_exp_2_0
Epoch 3/20
24/24 [==============================] - 48s 2s/step - loss: 1.3703 - accuracy: 0.4810 - val_loss: 0.8188 - val_accuracy: 0.6406

Epoch 00003: val_accuracy improved from 0.62500 to 0.64062, saving model to models\best_model_esc10_exp_2_0
Epoch 4/20
24/24 [==============================] - 48s 2s/step - loss: 1.1386 - accuracy: 0.5694 - val_loss: 0.6381 - val_accuracy: 0.7656

Epoch 00004: val_accuracy improved from 0.64062 to 0.76562, saving model to models\best_model_esc10_exp_2_0
Epoch 5/20
24/24 [==============================] - 48s 2s/step - loss: 0.9025 - accuracy: 0.6627 - val_loss: 0.6410 - val_accuracy: 0.8125

Epoch 00005: val_accuracy improved from 0.76562 to 0.81250, saving model to models\best_model_esc10_exp_2_0
Epoch 6/20
24/24 [==============================] - 47s 2s/step - loss: 0.7508 - accuracy: 0.7409 - val_loss: 0.5803 - val_accuracy: 0.7969

Epoch 00006: val_accuracy did not improve from 0.81250
Epoch 7/20
24/24 [==============================] - 46s 2s/step - loss: 0.6758 - accuracy: 0.7625 - val_loss: 0.6854 - val_accuracy: 0.7500

Epoch 00007: val_accuracy did not improve from 0.81250
Epoch 8/20
24/24 [==============================] - 43s 2s/step - loss: 0.5953 - accuracy: 0.7894 - val_loss: 0.7839 - val_accuracy: 0.7344

Epoch 00008: val_accuracy did not improve from 0.81250
Epoch 9/20
24/24 [==============================] - 48s 2s/step - loss: 0.5387 - accuracy: 0.8165 - val_loss: 0.5732 - val_accuracy: 0.8125

Epoch 00009: val_accuracy did not improve from 0.81250
Epoch 10/20
24/24 [==============================] - 48s 2s/step - loss: 0.4701 - accuracy: 0.8434 - val_loss: 0.6384 - val_accuracy: 0.8125

Epoch 00010: val_accuracy did not improve from 0.81250
Epoch 11/20
24/24 [==============================] - 45s 2s/step - loss: 0.4176 - accuracy: 0.8515 - val_loss: 0.7619 - val_accuracy: 0.7969

Epoch 00011: val_accuracy did not improve from 0.81250
Epoch 12/20
24/24 [==============================] - 47s 2s/step - loss: 0.4101 - accuracy: 0.8644 - val_loss: 0.6201 - val_accuracy: 0.8438

Epoch 00012: val_accuracy improved from 0.81250 to 0.84375, saving model to models\best_model_esc10_exp_2_0
Epoch 13/20
24/24 [==============================] - 48s 2s/step - loss: 0.3949 - accuracy: 0.8552 - val_loss: 0.7782 - val_accuracy: 0.8281

Epoch 00013: val_accuracy did not improve from 0.84375
Epoch 14/20
24/24 [==============================] - 46s 2s/step - loss: 0.3907 - accuracy: 0.8677 - val_loss: 0.5475 - val_accuracy: 0.8594

Epoch 00014: val_accuracy improved from 0.84375 to 0.85938, saving model to models\best_model_esc10_exp_2_0
Epoch 15/20
24/24 [==============================] - 47s 2s/step - loss: 0.2670 - accuracy: 0.9131 - val_loss: 0.8107 - val_accuracy: 0.8125

Epoch 00015: val_accuracy did not improve from 0.85938
Epoch 16/20
24/24 [==============================] - 48s 2s/step - loss: 0.2551 - accuracy: 0.9153 - val_loss: 0.7924 - val_accuracy: 0.8438

Epoch 00016: val_accuracy did not improve from 0.85938
Epoch 17/20
24/24 [==============================] - 44s 2s/step - loss: 0.2669 - accuracy: 0.9032 - val_loss: 0.8595 - val_accuracy: 0.8438

Epoch 00017: val_accuracy did not improve from 0.85938
Epoch 18/20
24/24 [==============================] - 48s 2s/step - loss: 0.2594 - accuracy: 0.9101 - val_loss: 0.9014 - val_accuracy: 0.8438

Epoch 00018: val_accuracy did not improve from 0.85938
Epoch 19/20
24/24 [==============================] - 47s 2s/step - loss: 0.2851 - accuracy: 0.9060 - val_loss: 0.7576 - val_accuracy: 0.8281

Epoch 00019: val_accuracy did not improve from 0.85938
Epoch 20/20
24/24 [==============================] - 43s 2s/step - loss: 0.2563 - accuracy: 0.9255 - val_loss: 0.7187 - val_accuracy: 0.8125

Epoch 00020: val_accuracy did not improve from 0.85938
Test accuracy:  0.8500000238418579
Epoch 1/20
24/24 [==============================] - 51s 2s/step - loss: 2.2634 - accuracy: 0.1455 - val_loss: 1.6419 - val_accuracy: 0.3906

Epoch 00001: val_accuracy improved from -inf to 0.39062, saving model to models\best_model_esc10_exp_2_1
Epoch 2/20
24/24 [==============================] - 52s 2s/step - loss: 1.7994 - accuracy: 0.3159 - val_loss: 1.1551 - val_accuracy: 0.5469

Epoch 00002: val_accuracy improved from 0.39062 to 0.54688, saving model to models\best_model_esc10_exp_2_1
Epoch 3/20
24/24 [==============================] - 47s 2s/step - loss: 1.3576 - accuracy: 0.4838 - val_loss: 0.7857 - val_accuracy: 0.7031

Epoch 00003: val_accuracy improved from 0.54688 to 0.70312, saving model to models\best_model_esc10_exp_2_1
Epoch 4/20
24/24 [==============================] - 51s 2s/step - loss: 1.0074 - accuracy: 0.6268 - val_loss: 0.7720 - val_accuracy: 0.7500

Epoch 00004: val_accuracy improved from 0.70312 to 0.75000, saving model to models\best_model_esc10_exp_2_1
Epoch 5/20
24/24 [==============================] - 48s 2s/step - loss: 0.8094 - accuracy: 0.6981 - val_loss: 0.6673 - val_accuracy: 0.8125

Epoch 00005: val_accuracy improved from 0.75000 to 0.81250, saving model to models\best_model_esc10_exp_2_1
Epoch 6/20
24/24 [==============================] - 49s 2s/step - loss: 0.6643 - accuracy: 0.7633 - val_loss: 0.6653 - val_accuracy: 0.7656

Epoch 00006: val_accuracy did not improve from 0.81250
Epoch 7/20
24/24 [==============================] - 50s 2s/step - loss: 0.6287 - accuracy: 0.7646 - val_loss: 0.8232 - val_accuracy: 0.7500

Epoch 00007: val_accuracy did not improve from 0.81250
Epoch 8/20
24/24 [==============================] - 49s 2s/step - loss: 0.5516 - accuracy: 0.7901 - val_loss: 0.7441 - val_accuracy: 0.7656

Epoch 00008: val_accuracy did not improve from 0.81250
Epoch 9/20
24/24 [==============================] - 47s 2s/step - loss: 0.5693 - accuracy: 0.7894 - val_loss: 0.7716 - val_accuracy: 0.7812

Epoch 00009: val_accuracy did not improve from 0.81250
Epoch 10/20
24/24 [==============================] - 51s 2s/step - loss: 0.4440 - accuracy: 0.8463 - val_loss: 0.6620 - val_accuracy: 0.7656

Epoch 00010: val_accuracy did not improve from 0.81250
Epoch 11/20
24/24 [==============================] - 50s 2s/step - loss: 0.4017 - accuracy: 0.8573 - val_loss: 0.8273 - val_accuracy: 0.7656

Epoch 00011: val_accuracy did not improve from 0.81250
Epoch 12/20
24/24 [==============================] - 48s 2s/step - loss: 0.4246 - accuracy: 0.8458 - val_loss: 0.8067 - val_accuracy: 0.7812

Epoch 00012: val_accuracy did not improve from 0.81250
Epoch 13/20
24/24 [==============================] - 50s 2s/step - loss: 0.3772 - accuracy: 0.8613 - val_loss: 0.6598 - val_accuracy: 0.7969

Epoch 00013: val_accuracy did not improve from 0.81250
Epoch 14/20
24/24 [==============================] - 47s 2s/step - loss: 0.3579 - accuracy: 0.8695 - val_loss: 0.7500 - val_accuracy: 0.8125

Epoch 00014: val_accuracy did not improve from 0.81250
Epoch 15/20
24/24 [==============================] - 51s 2s/step - loss: 0.3550 - accuracy: 0.8720 - val_loss: 0.7437 - val_accuracy: 0.7656

Epoch 00015: val_accuracy did not improve from 0.81250
Epoch 16/20
24/24 [==============================] - 47s 2s/step - loss: 0.3086 - accuracy: 0.8842 - val_loss: 0.9173 - val_accuracy: 0.7969

Epoch 00016: val_accuracy did not improve from 0.81250
Epoch 17/20
24/24 [==============================] - 48s 2s/step - loss: 0.3142 - accuracy: 0.8886 - val_loss: 1.1549 - val_accuracy: 0.8125

Epoch 00017: val_accuracy did not improve from 0.81250
Epoch 18/20
24/24 [==============================] - 51s 2s/step - loss: 0.2969 - accuracy: 0.8856 - val_loss: 0.8835 - val_accuracy: 0.7969

Epoch 00018: val_accuracy did not improve from 0.81250
Epoch 19/20
24/24 [==============================] - 48s 2s/step - loss: 0.2100 - accuracy: 0.9275 - val_loss: 0.9321 - val_accuracy: 0.8125

Epoch 00019: val_accuracy did not improve from 0.81250
Epoch 20/20
24/24 [==============================] - 49s 2s/step - loss: 0.1893 - accuracy: 0.9371 - val_loss: 0.8658 - val_accuracy: 0.8125

Epoch 00020: val_accuracy did not improve from 0.81250
Test accuracy:  0.762499988079071
Epoch 1/20
24/24 [==============================] - 50s 2s/step - loss: 2.2253 - accuracy: 0.1678 - val_loss: 1.5274 - val_accuracy: 0.3438

Epoch 00001: val_accuracy improved from -inf to 0.34375, saving model to models\best_model_esc10_exp_2_2
Epoch 2/20
24/24 [==============================] - 48s 2s/step - loss: 1.7480 - accuracy: 0.3520 - val_loss: 0.9992 - val_accuracy: 0.7188

Epoch 00002: val_accuracy improved from 0.34375 to 0.71875, saving model to models\best_model_esc10_exp_2_2
Epoch 3/20
24/24 [==============================] - 53s 2s/step - loss: 1.2486 - accuracy: 0.5389 - val_loss: 0.6631 - val_accuracy: 0.7031

Epoch 00003: val_accuracy did not improve from 0.71875
Epoch 4/20
24/24 [==============================] - 46s 2s/step - loss: 1.0048 - accuracy: 0.6456 - val_loss: 0.5914 - val_accuracy: 0.7969

Epoch 00004: val_accuracy improved from 0.71875 to 0.79688, saving model to models\best_model_esc10_exp_2_2
Epoch 5/20
24/24 [==============================] - 50s 2s/step - loss: 0.8375 - accuracy: 0.7112 - val_loss: 0.5645 - val_accuracy: 0.7500

Epoch 00005: val_accuracy did not improve from 0.79688
Epoch 6/20
24/24 [==============================] - 51s 2s/step - loss: 0.6690 - accuracy: 0.7577 - val_loss: 0.4659 - val_accuracy: 0.8281

Epoch 00006: val_accuracy improved from 0.79688 to 0.82812, saving model to models\best_model_esc10_exp_2_2
Epoch 7/20
24/24 [==============================] - 47s 2s/step - loss: 0.6637 - accuracy: 0.7548 - val_loss: 0.5094 - val_accuracy: 0.8125

Epoch 00007: val_accuracy did not improve from 0.82812
Epoch 8/20
24/24 [==============================] - 51s 2s/step - loss: 0.5618 - accuracy: 0.8011 - val_loss: 0.4190 - val_accuracy: 0.8750

Epoch 00008: val_accuracy improved from 0.82812 to 0.87500, saving model to models\best_model_esc10_exp_2_2
Epoch 9/20
24/24 [==============================] - 49s 2s/step - loss: 0.5319 - accuracy: 0.8113 - val_loss: 0.4697 - val_accuracy: 0.8438

Epoch 00009: val_accuracy did not improve from 0.87500
Epoch 10/20
24/24 [==============================] - 47s 2s/step - loss: 0.4585 - accuracy: 0.8397 - val_loss: 0.5418 - val_accuracy: 0.7656

Epoch 00010: val_accuracy did not improve from 0.87500
Epoch 11/20
24/24 [==============================] - 51s 2s/step - loss: 0.5399 - accuracy: 0.8075 - val_loss: 0.4640 - val_accuracy: 0.8438

Epoch 00011: val_accuracy did not improve from 0.87500
Epoch 12/20
24/24 [==============================] - 51s 2s/step - loss: 0.4464 - accuracy: 0.8502 - val_loss: 0.4739 - val_accuracy: 0.8906

Epoch 00012: val_accuracy improved from 0.87500 to 0.89062, saving model to models\best_model_esc10_exp_2_2
Epoch 13/20
24/24 [==============================] - 46s 2s/step - loss: 0.3112 - accuracy: 0.8886 - val_loss: 0.5757 - val_accuracy: 0.8125

Epoch 00013: val_accuracy did not improve from 0.89062
Epoch 14/20
24/24 [==============================] - 50s 2s/step - loss: 0.3000 - accuracy: 0.8867 - val_loss: 0.4754 - val_accuracy: 0.8281

Epoch 00014: val_accuracy did not improve from 0.89062
Epoch 15/20
24/24 [==============================] - 51s 2s/step - loss: 0.3205 - accuracy: 0.8846 - val_loss: 0.6194 - val_accuracy: 0.8281

Epoch 00015: val_accuracy did not improve from 0.89062
Epoch 16/20
24/24 [==============================] - 49s 2s/step - loss: 0.3223 - accuracy: 0.8797 - val_loss: 0.5242 - val_accuracy: 0.8750

Epoch 00016: val_accuracy did not improve from 0.89062
Epoch 17/20
24/24 [==============================] - 49s 2s/step - loss: 0.2598 - accuracy: 0.9043 - val_loss: 0.4833 - val_accuracy: 0.9062

Epoch 00017: val_accuracy improved from 0.89062 to 0.90625, saving model to models\best_model_esc10_exp_2_2
Epoch 18/20
24/24 [==============================] - 52s 2s/step - loss: 0.2264 - accuracy: 0.9173 - val_loss: 0.4306 - val_accuracy: 0.8594

Epoch 00018: val_accuracy did not improve from 0.90625
Epoch 19/20
24/24 [==============================] - 49s 2s/step - loss: 0.2492 - accuracy: 0.9247 - val_loss: 0.6275 - val_accuracy: 0.8594

Epoch 00019: val_accuracy did not improve from 0.90625
Epoch 20/20
24/24 [==============================] - 49s 2s/step - loss: 0.1975 - accuracy: 0.9261 - val_loss: 0.5819 - val_accuracy: 0.8594

Epoch 00020: val_accuracy did not improve from 0.90625
Test accuracy:  0.887499988079071
Epoch 1/20
24/24 [==============================] - 51s 2s/step - loss: 2.1924 - accuracy: 0.1882 - val_loss: 1.5500 - val_accuracy: 0.4219

Epoch 00001: val_accuracy improved from -inf to 0.42188, saving model to models\best_model_esc10_exp_2_3
Epoch 2/20
24/24 [==============================] - 46s 2s/step - loss: 1.6988 - accuracy: 0.3403 - val_loss: 1.1966 - val_accuracy: 0.5781

Epoch 00002: val_accuracy improved from 0.42188 to 0.57812, saving model to models\best_model_esc10_exp_2_3
Epoch 3/20
24/24 [==============================] - 49s 2s/step - loss: 1.3852 - accuracy: 0.4753 - val_loss: 0.8062 - val_accuracy: 0.6719

Epoch 00003: val_accuracy improved from 0.57812 to 0.67188, saving model to models\best_model_esc10_exp_2_3
Epoch 4/20
24/24 [==============================] - 52s 2s/step - loss: 1.0674 - accuracy: 0.6046 - val_loss: 0.6269 - val_accuracy: 0.7500

Epoch 00004: val_accuracy improved from 0.67188 to 0.75000, saving model to models\best_model_esc10_exp_2_3
Epoch 5/20
24/24 [==============================] - 51s 2s/step - loss: 0.9349 - accuracy: 0.6579 - val_loss: 0.6882 - val_accuracy: 0.7500

Epoch 00005: val_accuracy did not improve from 0.75000
Epoch 6/20
24/24 [==============================] - 51s 2s/step - loss: 0.8603 - accuracy: 0.6827 - val_loss: 0.6720 - val_accuracy: 0.7031

Epoch 00006: val_accuracy did not improve from 0.75000
Epoch 7/20
24/24 [==============================] - 51s 2s/step - loss: 0.7329 - accuracy: 0.7298 - val_loss: 0.6051 - val_accuracy: 0.7500

Epoch 00007: val_accuracy did not improve from 0.75000
Epoch 8/20
24/24 [==============================] - 52s 2s/step - loss: 0.6063 - accuracy: 0.7750 - val_loss: 0.5398 - val_accuracy: 0.7969

Epoch 00008: val_accuracy improved from 0.75000 to 0.79688, saving model to models\best_model_esc10_exp_2_3
Epoch 9/20
24/24 [==============================] - 41s 2s/step - loss: 0.5360 - accuracy: 0.8091 - val_loss: 0.5697 - val_accuracy: 0.8594

Epoch 00009: val_accuracy improved from 0.79688 to 0.85938, saving model to models\best_model_esc10_exp_2_3
Epoch 10/20
24/24 [==============================] - 50s 2s/step - loss: 0.4672 - accuracy: 0.8514 - val_loss: 0.6092 - val_accuracy: 0.8281

Epoch 00010: val_accuracy did not improve from 0.85938
Epoch 11/20
24/24 [==============================] - 52s 2s/step - loss: 0.4664 - accuracy: 0.8330 - val_loss: 0.6270 - val_accuracy: 0.7969

Epoch 00011: val_accuracy did not improve from 0.85938
Epoch 12/20
24/24 [==============================] - 51s 2s/step - loss: 0.3876 - accuracy: 0.8595 - val_loss: 0.6319 - val_accuracy: 0.7812

Epoch 00012: val_accuracy did not improve from 0.85938
Epoch 13/20
24/24 [==============================] - 51s 2s/step - loss: 0.3169 - accuracy: 0.8924 - val_loss: 0.4693 - val_accuracy: 0.8281

Epoch 00013: val_accuracy did not improve from 0.85938
Epoch 14/20
24/24 [==============================] - 48s 2s/step - loss: 0.3417 - accuracy: 0.8928 - val_loss: 0.7004 - val_accuracy: 0.8281

Epoch 00014: val_accuracy did not improve from 0.85938
Epoch 15/20
24/24 [==============================] - 52s 2s/step - loss: 0.3597 - accuracy: 0.8709 - val_loss: 0.5838 - val_accuracy: 0.8281

Epoch 00015: val_accuracy did not improve from 0.85938
Epoch 16/20
24/24 [==============================] - 50s 2s/step - loss: 0.3025 - accuracy: 0.8990 - val_loss: 0.6694 - val_accuracy: 0.8125

Epoch 00016: val_accuracy did not improve from 0.85938
Epoch 17/20
24/24 [==============================] - 50s 2s/step - loss: 0.2543 - accuracy: 0.8995 - val_loss: 0.5858 - val_accuracy: 0.7969

Epoch 00017: val_accuracy did not improve from 0.85938
Epoch 18/20
24/24 [==============================] - 51s 2s/step - loss: 0.2581 - accuracy: 0.9073 - val_loss: 0.5297 - val_accuracy: 0.8906

Epoch 00018: val_accuracy improved from 0.85938 to 0.89062, saving model to models\best_model_esc10_exp_2_3
Epoch 19/20
24/24 [==============================] - 47s 2s/step - loss: 0.1987 - accuracy: 0.9342 - val_loss: 0.5464 - val_accuracy: 0.8438

Epoch 00019: val_accuracy did not improve from 0.89062
Epoch 20/20
24/24 [==============================] - 51s 2s/step - loss: 0.2784 - accuracy: 0.9122 - val_loss: 0.7255 - val_accuracy: 0.8750

Epoch 00020: val_accuracy did not improve from 0.89062
Test accuracy:  0.887499988079071
Epoch 1/20
24/24 [==============================] - 49s 2s/step - loss: 2.1693 - accuracy: 0.1795 - val_loss: 1.4118 - val_accuracy: 0.5312

Epoch 00001: val_accuracy improved from -inf to 0.53125, saving model to models\best_model_esc10_exp_2_4
Epoch 2/20
24/24 [==============================] - 53s 2s/step - loss: 1.6816 - accuracy: 0.3592 - val_loss: 1.0087 - val_accuracy: 0.6406

Epoch 00002: val_accuracy improved from 0.53125 to 0.64062, saving model to models\best_model_esc10_exp_2_4
Epoch 3/20
24/24 [==============================] - 52s 2s/step - loss: 1.3017 - accuracy: 0.5195 - val_loss: 0.8621 - val_accuracy: 0.7031

Epoch 00003: val_accuracy improved from 0.64062 to 0.70312, saving model to models\best_model_esc10_exp_2_4
Epoch 4/20
24/24 [==============================] - 49s 2s/step - loss: 1.0756 - accuracy: 0.6108 - val_loss: 0.6664 - val_accuracy: 0.7500

Epoch 00004: val_accuracy improved from 0.70312 to 0.75000, saving model to models\best_model_esc10_exp_2_4
Epoch 5/20
24/24 [==============================] - 52s 2s/step - loss: 0.8927 - accuracy: 0.6741 - val_loss: 0.6054 - val_accuracy: 0.7656

Epoch 00005: val_accuracy improved from 0.75000 to 0.76562, saving model to models\best_model_esc10_exp_2_4
Epoch 6/20
24/24 [==============================] - 50s 2s/step - loss: 0.7702 - accuracy: 0.7312 - val_loss: 0.5912 - val_accuracy: 0.8125

Epoch 00006: val_accuracy improved from 0.76562 to 0.81250, saving model to models\best_model_esc10_exp_2_4
Epoch 7/20
24/24 [==============================] - 50s 2s/step - loss: 0.6373 - accuracy: 0.7665 - val_loss: 0.5360 - val_accuracy: 0.8281

Epoch 00007: val_accuracy improved from 0.81250 to 0.82812, saving model to models\best_model_esc10_exp_2_4
Epoch 8/20
24/24 [==============================] - 52s 2s/step - loss: 0.5707 - accuracy: 0.7828 - val_loss: 0.6763 - val_accuracy: 0.7812

Epoch 00008: val_accuracy did not improve from 0.82812
Epoch 9/20
24/24 [==============================] - 46s 2s/step - loss: 0.5339 - accuracy: 0.8102 - val_loss: 0.4847 - val_accuracy: 0.8438

Epoch 00009: val_accuracy improved from 0.82812 to 0.84375, saving model to models\best_model_esc10_exp_2_4
Epoch 10/20
24/24 [==============================] - 52s 2s/step - loss: 0.4570 - accuracy: 0.8399 - val_loss: 0.5748 - val_accuracy: 0.7969

Epoch 00010: val_accuracy did not improve from 0.84375
Epoch 11/20
24/24 [==============================] - 51s 2s/step - loss: 0.3722 - accuracy: 0.8705 - val_loss: 0.5811 - val_accuracy: 0.8281

Epoch 00011: val_accuracy did not improve from 0.84375
Epoch 12/20
24/24 [==============================] - 50s 2s/step - loss: 0.3552 - accuracy: 0.8689 - val_loss: 0.6122 - val_accuracy: 0.8438

Epoch 00012: val_accuracy did not improve from 0.84375
Epoch 13/20
24/24 [==============================] - 52s 2s/step - loss: 0.2901 - accuracy: 0.9099 - val_loss: 0.6829 - val_accuracy: 0.7500

Epoch 00013: val_accuracy did not improve from 0.84375
Epoch 14/20
24/24 [==============================] - 46s 2s/step - loss: 0.3442 - accuracy: 0.8844 - val_loss: 0.5365 - val_accuracy: 0.8281

Epoch 00014: val_accuracy did not improve from 0.84375
Epoch 15/20
24/24 [==============================] - 52s 2s/step - loss: 0.3072 - accuracy: 0.8941 - val_loss: 0.6290 - val_accuracy: 0.7812

Epoch 00015: val_accuracy did not improve from 0.84375
Epoch 16/20
24/24 [==============================] - 52s 2s/step - loss: 0.2810 - accuracy: 0.9181 - val_loss: 0.6116 - val_accuracy: 0.8281

Epoch 00016: val_accuracy did not improve from 0.84375
Epoch 17/20
24/24 [==============================] - 48s 2s/step - loss: 0.2552 - accuracy: 0.8980 - val_loss: 0.7286 - val_accuracy: 0.8438

Epoch 00017: val_accuracy did not improve from 0.84375
Epoch 18/20
24/24 [==============================] - 50s 2s/step - loss: 0.2155 - accuracy: 0.9213 - val_loss: 0.9512 - val_accuracy: 0.8125

Epoch 00018: val_accuracy did not improve from 0.84375
Epoch 19/20
24/24 [==============================] - 52s 2s/step - loss: 0.2094 - accuracy: 0.9243 - val_loss: 0.7325 - val_accuracy: 0.7656

Epoch 00019: val_accuracy did not improve from 0.84375
Epoch 20/20
24/24 [==============================] - 49s 2s/step - loss: 0.2293 - accuracy: 0.9073 - val_loss: 0.6657 - val_accuracy: 0.8438

Epoch 00020: val_accuracy did not improve from 0.84375
Test accuracy:  0.875
Epoch 1/20
24/24 [==============================] - 38s 2s/step - loss: 2.2057 - accuracy: 0.1525 - val_loss: 1.4488 - val_accuracy: 0.4531

Epoch 00001: val_accuracy improved from -inf to 0.45312, saving model to models\best_model_esc10_exp_2_5
Epoch 2/20
24/24 [==============================] - 41s 2s/step - loss: 1.7053 - accuracy: 0.3488 - val_loss: 1.0038 - val_accuracy: 0.5938

Epoch 00002: val_accuracy improved from 0.45312 to 0.59375, saving model to models\best_model_esc10_exp_2_5
Epoch 3/20
24/24 [==============================] - 40s 2s/step - loss: 1.2477 - accuracy: 0.5336 - val_loss: 0.8008 - val_accuracy: 0.6562

Epoch 00003: val_accuracy improved from 0.59375 to 0.65625, saving model to models\best_model_esc10_exp_2_5
Epoch 4/20
24/24 [==============================] - 38s 2s/step - loss: 0.9557 - accuracy: 0.6703 - val_loss: 0.7176 - val_accuracy: 0.6875

Epoch 00004: val_accuracy improved from 0.65625 to 0.68750, saving model to models\best_model_esc10_exp_2_5
Epoch 5/20
24/24 [==============================] - 40s 2s/step - loss: 0.8630 - accuracy: 0.6697 - val_loss: 0.5641 - val_accuracy: 0.8438

Epoch 00005: val_accuracy improved from 0.68750 to 0.84375, saving model to models\best_model_esc10_exp_2_5
Epoch 6/20
24/24 [==============================] - 37s 2s/step - loss: 0.6993 - accuracy: 0.7380 - val_loss: 0.8701 - val_accuracy: 0.7344

Epoch 00006: val_accuracy did not improve from 0.84375
Epoch 7/20
24/24 [==============================] - 44s 2s/step - loss: 0.6864 - accuracy: 0.7459 - val_loss: 0.6738 - val_accuracy: 0.7656

Epoch 00007: val_accuracy did not improve from 0.84375
Epoch 8/20
24/24 [==============================] - 45s 2s/step - loss: 0.5620 - accuracy: 0.7916 - val_loss: 0.6033 - val_accuracy: 0.8438

Epoch 00008: val_accuracy did not improve from 0.84375
Epoch 9/20
24/24 [==============================] - 41s 2s/step - loss: 0.4653 - accuracy: 0.8176 - val_loss: 0.5830 - val_accuracy: 0.8594

Epoch 00009: val_accuracy improved from 0.84375 to 0.85938, saving model to models\best_model_esc10_exp_2_5
Epoch 10/20
24/24 [==============================] - 38s 2s/step - loss: 0.4708 - accuracy: 0.8315 - val_loss: 0.5312 - val_accuracy: 0.7969

Epoch 00010: val_accuracy did not improve from 0.85938
Epoch 11/20
24/24 [==============================] - 46s 2s/step - loss: 0.3995 - accuracy: 0.8571 - val_loss: 0.5942 - val_accuracy: 0.8438

Epoch 00011: val_accuracy did not improve from 0.85938
Epoch 12/20
24/24 [==============================] - 42s 2s/step - loss: 0.3256 - accuracy: 0.8756 - val_loss: 0.9250 - val_accuracy: 0.8438

Epoch 00012: val_accuracy did not improve from 0.85938
Epoch 13/20
24/24 [==============================] - 37s 2s/step - loss: 0.2979 - accuracy: 0.8936 - val_loss: 0.7320 - val_accuracy: 0.8281

Epoch 00013: val_accuracy did not improve from 0.85938
Epoch 14/20
24/24 [==============================] - 41s 2s/step - loss: 0.3111 - accuracy: 0.8997 - val_loss: 0.7850 - val_accuracy: 0.8281

Epoch 00014: val_accuracy did not improve from 0.85938
Epoch 15/20
24/24 [==============================] - 40s 2s/step - loss: 0.3271 - accuracy: 0.8844 - val_loss: 0.6442 - val_accuracy: 0.8906

Epoch 00015: val_accuracy improved from 0.85938 to 0.89062, saving model to models\best_model_esc10_exp_2_5
Epoch 16/20
24/24 [==============================] - 40s 2s/step - loss: 0.2279 - accuracy: 0.9141 - val_loss: 0.6850 - val_accuracy: 0.8906

Epoch 00016: val_accuracy did not improve from 0.89062
Epoch 17/20
24/24 [==============================] - 40s 2s/step - loss: 0.2460 - accuracy: 0.9155 - val_loss: 0.6261 - val_accuracy: 0.8281

Epoch 00017: val_accuracy did not improve from 0.89062
Epoch 18/20
24/24 [==============================] - 40s 2s/step - loss: 0.2204 - accuracy: 0.9181 - val_loss: 0.7678 - val_accuracy: 0.8594

Epoch 00018: val_accuracy did not improve from 0.89062
Epoch 19/20
24/24 [==============================] - 41s 2s/step - loss: 0.2637 - accuracy: 0.9067 - val_loss: 0.5547 - val_accuracy: 0.8750

Epoch 00019: val_accuracy did not improve from 0.89062
Epoch 20/20
24/24 [==============================] - 40s 2s/step - loss: 0.2504 - accuracy: 0.9136 - val_loss: 0.7633 - val_accuracy: 0.8906

Epoch 00020: val_accuracy did not improve from 0.89062
Test accuracy:  0.887499988079071
Epoch 1/20
24/24 [==============================] - 40s 2s/step - loss: 2.1794 - accuracy: 0.1791 - val_loss: 1.6281 - val_accuracy: 0.4062

Epoch 00001: val_accuracy improved from -inf to 0.40625, saving model to models\best_model_esc10_exp_2_6
Epoch 2/20
24/24 [==============================] - 39s 2s/step - loss: 1.7508 - accuracy: 0.3429 - val_loss: 1.2087 - val_accuracy: 0.5938

Epoch 00002: val_accuracy improved from 0.40625 to 0.59375, saving model to models\best_model_esc10_exp_2_6
Epoch 3/20
24/24 [==============================] - 39s 2s/step - loss: 1.3929 - accuracy: 0.4851 - val_loss: 0.7927 - val_accuracy: 0.6406

Epoch 00003: val_accuracy improved from 0.59375 to 0.64062, saving model to models\best_model_esc10_exp_2_6
Epoch 4/20
24/24 [==============================] - 39s 2s/step - loss: 1.0318 - accuracy: 0.6159 - val_loss: 0.6739 - val_accuracy: 0.7656

Epoch 00004: val_accuracy improved from 0.64062 to 0.76562, saving model to models\best_model_esc10_exp_2_6
Epoch 5/20
24/24 [==============================] - 39s 2s/step - loss: 0.9368 - accuracy: 0.6641 - val_loss: 0.5898 - val_accuracy: 0.7656

Epoch 00005: val_accuracy did not improve from 0.76562
Epoch 6/20
24/24 [==============================] - 39s 2s/step - loss: 0.6910 - accuracy: 0.7589 - val_loss: 0.5756 - val_accuracy: 0.7969

Epoch 00006: val_accuracy improved from 0.76562 to 0.79688, saving model to models\best_model_esc10_exp_2_6
Epoch 7/20
24/24 [==============================] - 40s 2s/step - loss: 0.6442 - accuracy: 0.7652 - val_loss: 0.5352 - val_accuracy: 0.7969

Epoch 00007: val_accuracy did not improve from 0.79688
Epoch 8/20
24/24 [==============================] - 39s 2s/step - loss: 0.5844 - accuracy: 0.7826 - val_loss: 0.5926 - val_accuracy: 0.7969

Epoch 00008: val_accuracy did not improve from 0.79688
Epoch 9/20
24/24 [==============================] - 39s 2s/step - loss: 0.4728 - accuracy: 0.8389 - val_loss: 0.6603 - val_accuracy: 0.7969

Epoch 00009: val_accuracy did not improve from 0.79688
Epoch 10/20
24/24 [==============================] - 39s 2s/step - loss: 0.4820 - accuracy: 0.8183 - val_loss: 0.6508 - val_accuracy: 0.7969

Epoch 00010: val_accuracy did not improve from 0.79688
Epoch 11/20
24/24 [==============================] - 39s 2s/step - loss: 0.4298 - accuracy: 0.8423 - val_loss: 0.5629 - val_accuracy: 0.8438

Epoch 00011: val_accuracy improved from 0.79688 to 0.84375, saving model to models\best_model_esc10_exp_2_6
Epoch 12/20
24/24 [==============================] - 40s 2s/step - loss: 0.3515 - accuracy: 0.8771 - val_loss: 0.7141 - val_accuracy: 0.8438

Epoch 00012: val_accuracy did not improve from 0.84375
Epoch 13/20
24/24 [==============================] - 39s 2s/step - loss: 0.3241 - accuracy: 0.8903 - val_loss: 0.5655 - val_accuracy: 0.8594

Epoch 00013: val_accuracy improved from 0.84375 to 0.85938, saving model to models\best_model_esc10_exp_2_6
Epoch 14/20
24/24 [==============================] - 39s 2s/step - loss: 0.2656 - accuracy: 0.9049 - val_loss: 0.6568 - val_accuracy: 0.8438

Epoch 00014: val_accuracy did not improve from 0.85938
Epoch 15/20
24/24 [==============================] - 39s 2s/step - loss: 0.2841 - accuracy: 0.9071 - val_loss: 0.6466 - val_accuracy: 0.8438

Epoch 00015: val_accuracy did not improve from 0.85938
Epoch 16/20
24/24 [==============================] - 39s 2s/step - loss: 0.2420 - accuracy: 0.9175 - val_loss: 0.6130 - val_accuracy: 0.8594

Epoch 00016: val_accuracy did not improve from 0.85938
Epoch 17/20
24/24 [==============================] - 39s 2s/step - loss: 0.2305 - accuracy: 0.9197 - val_loss: 0.6461 - val_accuracy: 0.8750

Epoch 00017: val_accuracy improved from 0.85938 to 0.87500, saving model to models\best_model_esc10_exp_2_6
Epoch 18/20
24/24 [==============================] - 39s 2s/step - loss: 0.2004 - accuracy: 0.9294 - val_loss: 0.6385 - val_accuracy: 0.8750

Epoch 00018: val_accuracy did not improve from 0.87500
Epoch 19/20
24/24 [==============================] - 39s 2s/step - loss: 0.2036 - accuracy: 0.9287 - val_loss: 0.6355 - val_accuracy: 0.8906

Epoch 00019: val_accuracy improved from 0.87500 to 0.89062, saving model to models\best_model_esc10_exp_2_6
Epoch 20/20
24/24 [==============================] - 39s 2s/step - loss: 0.1446 - accuracy: 0.9425 - val_loss: 0.7242 - val_accuracy: 0.8750

Epoch 00020: val_accuracy did not improve from 0.89062
Test accuracy:  0.8500000238418579
Epoch 1/20
24/24 [==============================] - 41s 2s/step - loss: 2.2005 - accuracy: 0.1824 - val_loss: 1.5205 - val_accuracy: 0.4688

Epoch 00001: val_accuracy improved from -inf to 0.46875, saving model to models\best_model_esc10_exp_2_7
Epoch 2/20
24/24 [==============================] - 39s 2s/step - loss: 1.6900 - accuracy: 0.3608 - val_loss: 0.9033 - val_accuracy: 0.6562

Epoch 00002: val_accuracy improved from 0.46875 to 0.65625, saving model to models\best_model_esc10_exp_2_7
Epoch 3/20
24/24 [==============================] - 38s 2s/step - loss: 1.2013 - accuracy: 0.5566 - val_loss: 0.7335 - val_accuracy: 0.7344

Epoch 00003: val_accuracy improved from 0.65625 to 0.73438, saving model to models\best_model_esc10_exp_2_7
Epoch 4/20
24/24 [==============================] - 38s 2s/step - loss: 1.0040 - accuracy: 0.6188 - val_loss: 0.6951 - val_accuracy: 0.7656

Epoch 00004: val_accuracy improved from 0.73438 to 0.76562, saving model to models\best_model_esc10_exp_2_7
Epoch 5/20
24/24 [==============================] - 38s 2s/step - loss: 0.9107 - accuracy: 0.6610 - val_loss: 0.5257 - val_accuracy: 0.8125

Epoch 00005: val_accuracy improved from 0.76562 to 0.81250, saving model to models\best_model_esc10_exp_2_7
Epoch 6/20
24/24 [==============================] - 38s 2s/step - loss: 0.6916 - accuracy: 0.7484 - val_loss: 0.5869 - val_accuracy: 0.7500

Epoch 00006: val_accuracy did not improve from 0.81250
Epoch 7/20
24/24 [==============================] - 38s 2s/step - loss: 0.6421 - accuracy: 0.7704 - val_loss: 0.5547 - val_accuracy: 0.8281

Epoch 00007: val_accuracy improved from 0.81250 to 0.82812, saving model to models\best_model_esc10_exp_2_7
Epoch 8/20
24/24 [==============================] - 35s 1s/step - loss: 0.5687 - accuracy: 0.7833 - val_loss: 0.7153 - val_accuracy: 0.7188

Epoch 00008: val_accuracy did not improve from 0.82812
Epoch 9/20
24/24 [==============================] - 30s 1s/step - loss: 0.4970 - accuracy: 0.8178 - val_loss: 0.5841 - val_accuracy: 0.7656

Epoch 00009: val_accuracy did not improve from 0.82812
Epoch 10/20
24/24 [==============================] - 39s 2s/step - loss: 0.5260 - accuracy: 0.8081 - val_loss: 0.6001 - val_accuracy: 0.8125

Epoch 00010: val_accuracy did not improve from 0.82812
Epoch 11/20
24/24 [==============================] - 38s 2s/step - loss: 0.4183 - accuracy: 0.8413 - val_loss: 0.6004 - val_accuracy: 0.7969

Epoch 00011: val_accuracy did not improve from 0.82812
Epoch 12/20
24/24 [==============================] - 38s 2s/step - loss: 0.4278 - accuracy: 0.8443 - val_loss: 0.5436 - val_accuracy: 0.8281

Epoch 00012: val_accuracy did not improve from 0.82812
Epoch 13/20
24/24 [==============================] - 38s 2s/step - loss: 0.3981 - accuracy: 0.8535 - val_loss: 0.6378 - val_accuracy: 0.8125

Epoch 00013: val_accuracy did not improve from 0.82812
Epoch 14/20
24/24 [==============================] - 37s 2s/step - loss: 0.2817 - accuracy: 0.9027 - val_loss: 0.6971 - val_accuracy: 0.7969

Epoch 00014: val_accuracy did not improve from 0.82812
Epoch 15/20
24/24 [==============================] - 36s 1s/step - loss: 0.2933 - accuracy: 0.8882 - val_loss: 0.7751 - val_accuracy: 0.8125

Epoch 00015: val_accuracy did not improve from 0.82812
Epoch 16/20
24/24 [==============================] - 38s 2s/step - loss: 0.2591 - accuracy: 0.9041 - val_loss: 0.5954 - val_accuracy: 0.8281

Epoch 00016: val_accuracy did not improve from 0.82812
Epoch 17/20
24/24 [==============================] - 38s 2s/step - loss: 0.2201 - accuracy: 0.9228 - val_loss: 0.5608 - val_accuracy: 0.8438

Epoch 00017: val_accuracy improved from 0.82812 to 0.84375, saving model to models\best_model_esc10_exp_2_7
Epoch 18/20
24/24 [==============================] - 36s 2s/step - loss: 0.2199 - accuracy: 0.9286 - val_loss: 0.7814 - val_accuracy: 0.8125

Epoch 00018: val_accuracy did not improve from 0.84375
Epoch 19/20
24/24 [==============================] - 36s 1s/step - loss: 0.1958 - accuracy: 0.9228 - val_loss: 0.6116 - val_accuracy: 0.8438

Epoch 00019: val_accuracy did not improve from 0.84375
Epoch 20/20
24/24 [==============================] - 38s 2s/step - loss: 0.2256 - accuracy: 0.9248 - val_loss: 0.7338 - val_accuracy: 0.8438

Epoch 00020: val_accuracy did not improve from 0.84375
Test accuracy:  0.875
Epoch 1/20
24/24 [==============================] - 38s 2s/step - loss: 2.1888 - accuracy: 0.1636 - val_loss: 1.6348 - val_accuracy: 0.3125

Epoch 00001: val_accuracy improved from -inf to 0.31250, saving model to models\best_model_esc10_exp_2_8
Epoch 2/20
24/24 [==============================] - 30s 1s/step - loss: 1.8436 - accuracy: 0.2757 - val_loss: 1.0927 - val_accuracy: 0.6562

Epoch 00002: val_accuracy improved from 0.31250 to 0.65625, saving model to models\best_model_esc10_exp_2_8
Epoch 3/20
24/24 [==============================] - 33s 1s/step - loss: 1.4085 - accuracy: 0.4849 - val_loss: 0.8013 - val_accuracy: 0.7188

Epoch 00003: val_accuracy improved from 0.65625 to 0.71875, saving model to models\best_model_esc10_exp_2_8
Epoch 4/20
24/24 [==============================] - 34s 1s/step - loss: 1.1504 - accuracy: 0.5907 - val_loss: 0.5811 - val_accuracy: 0.8125

Epoch 00004: val_accuracy improved from 0.71875 to 0.81250, saving model to models\best_model_esc10_exp_2_8
Epoch 5/20
24/24 [==============================] - 34s 1s/step - loss: 0.8647 - accuracy: 0.6793 - val_loss: 0.6601 - val_accuracy: 0.7500

Epoch 00005: val_accuracy did not improve from 0.81250
Epoch 6/20
24/24 [==============================] - 33s 1s/step - loss: 0.7338 - accuracy: 0.7477 - val_loss: 0.5134 - val_accuracy: 0.8281

Epoch 00006: val_accuracy improved from 0.81250 to 0.82812, saving model to models\best_model_esc10_exp_2_8
Epoch 7/20
24/24 [==============================] - 33s 1s/step - loss: 0.6760 - accuracy: 0.7544 - val_loss: 0.4368 - val_accuracy: 0.8438

Epoch 00007: val_accuracy improved from 0.82812 to 0.84375, saving model to models\best_model_esc10_exp_2_8
Epoch 8/20
24/24 [==============================] - 31s 1s/step - loss: 0.5851 - accuracy: 0.7958 - val_loss: 0.5841 - val_accuracy: 0.7500

Epoch 00008: val_accuracy did not improve from 0.84375
Epoch 9/20
24/24 [==============================] - 32s 1s/step - loss: 0.5082 - accuracy: 0.8259 - val_loss: 0.5356 - val_accuracy: 0.7969

Epoch 00009: val_accuracy did not improve from 0.84375
Epoch 10/20
24/24 [==============================] - 33s 1s/step - loss: 0.4974 - accuracy: 0.8258 - val_loss: 0.4641 - val_accuracy: 0.8281

Epoch 00010: val_accuracy did not improve from 0.84375
Epoch 11/20
24/24 [==============================] - 33s 1s/step - loss: 0.4168 - accuracy: 0.8500 - val_loss: 0.7019 - val_accuracy: 0.7969

Epoch 00011: val_accuracy did not improve from 0.84375
Epoch 12/20
24/24 [==============================] - 33s 1s/step - loss: 0.4173 - accuracy: 0.8597 - val_loss: 0.3893 - val_accuracy: 0.8594

Epoch 00012: val_accuracy improved from 0.84375 to 0.85938, saving model to models\best_model_esc10_exp_2_8
Epoch 13/20
24/24 [==============================] - 31s 1s/step - loss: 0.3150 - accuracy: 0.8814 - val_loss: 0.3841 - val_accuracy: 0.8594

Epoch 00013: val_accuracy did not improve from 0.85938
Epoch 14/20
24/24 [==============================] - 32s 1s/step - loss: 0.2677 - accuracy: 0.9124 - val_loss: 0.4109 - val_accuracy: 0.8438

Epoch 00014: val_accuracy did not improve from 0.85938
Epoch 15/20
24/24 [==============================] - 35s 1s/step - loss: 0.2626 - accuracy: 0.9061 - val_loss: 0.5420 - val_accuracy: 0.8906

Epoch 00015: val_accuracy improved from 0.85938 to 0.89062, saving model to models\best_model_esc10_exp_2_8
Epoch 16/20
24/24 [==============================] - 34s 1s/step - loss: 0.2265 - accuracy: 0.9211 - val_loss: 0.6572 - val_accuracy: 0.8125

Epoch 00016: val_accuracy did not improve from 0.89062
Epoch 17/20
24/24 [==============================] - 32s 1s/step - loss: 0.3010 - accuracy: 0.8999 - val_loss: 0.4218 - val_accuracy: 0.8438

Epoch 00017: val_accuracy did not improve from 0.89062
Epoch 18/20
24/24 [==============================] - 33s 1s/step - loss: 0.2809 - accuracy: 0.9028 - val_loss: 0.3989 - val_accuracy: 0.8125

Epoch 00018: val_accuracy did not improve from 0.89062
Epoch 19/20
24/24 [==============================] - 36s 2s/step - loss: 0.1856 - accuracy: 0.9393 - val_loss: 0.4561 - val_accuracy: 0.8438

Epoch 00019: val_accuracy did not improve from 0.89062
Epoch 20/20
24/24 [==============================] - 34s 1s/step - loss: 0.1589 - accuracy: 0.9509 - val_loss: 0.5367 - val_accuracy: 0.8750

Epoch 00020: val_accuracy did not improve from 0.89062
Test accuracy:  0.8500000238418579
Epoch 1/20
24/24 [==============================] - 32s 1s/step - loss: 2.2384 - accuracy: 0.1587 - val_loss: 1.5028 - val_accuracy: 0.5000

Epoch 00001: val_accuracy improved from -inf to 0.50000, saving model to models\best_model_esc10_exp_2_9
Epoch 2/20
24/24 [==============================] - 33s 1s/step - loss: 1.7569 - accuracy: 0.3662 - val_loss: 1.1705 - val_accuracy: 0.6719

Epoch 00002: val_accuracy improved from 0.50000 to 0.67188, saving model to models\best_model_esc10_exp_2_9
Epoch 3/20
24/24 [==============================] - 33s 1s/step - loss: 1.3631 - accuracy: 0.4984 - val_loss: 0.6258 - val_accuracy: 0.7812

Epoch 00003: val_accuracy improved from 0.67188 to 0.78125, saving model to models\best_model_esc10_exp_2_9
Epoch 4/20
24/24 [==============================] - 34s 1s/step - loss: 1.0600 - accuracy: 0.6132 - val_loss: 0.7286 - val_accuracy: 0.7188

Epoch 00004: val_accuracy did not improve from 0.78125
Epoch 5/20
24/24 [==============================] - 34s 1s/step - loss: 0.8576 - accuracy: 0.6767 - val_loss: 0.5053 - val_accuracy: 0.8438

Epoch 00005: val_accuracy improved from 0.78125 to 0.84375, saving model to models\best_model_esc10_exp_2_9
Epoch 6/20
24/24 [==============================] - 34s 1s/step - loss: 0.7341 - accuracy: 0.7404 - val_loss: 0.5265 - val_accuracy: 0.8281

Epoch 00006: val_accuracy did not improve from 0.84375
Epoch 7/20
24/24 [==============================] - 32s 1s/step - loss: 0.5794 - accuracy: 0.7874 - val_loss: 0.5411 - val_accuracy: 0.8125

Epoch 00007: val_accuracy did not improve from 0.84375
Epoch 8/20
24/24 [==============================] - 31s 1s/step - loss: 0.5658 - accuracy: 0.7993 - val_loss: 0.6002 - val_accuracy: 0.8594

Epoch 00008: val_accuracy improved from 0.84375 to 0.85938, saving model to models\best_model_esc10_exp_2_9
Epoch 9/20
24/24 [==============================] - 34s 1s/step - loss: 0.4611 - accuracy: 0.8275 - val_loss: 0.4451 - val_accuracy: 0.8750

Epoch 00009: val_accuracy improved from 0.85938 to 0.87500, saving model to models\best_model_esc10_exp_2_9
Epoch 10/20
24/24 [==============================] - 34s 1s/step - loss: 0.4344 - accuracy: 0.8405 - val_loss: 0.6171 - val_accuracy: 0.8438

Epoch 00010: val_accuracy did not improve from 0.87500
Epoch 11/20
24/24 [==============================] - 35s 1s/step - loss: 0.3876 - accuracy: 0.8644 - val_loss: 0.6599 - val_accuracy: 0.8438

Epoch 00011: val_accuracy did not improve from 0.87500
Epoch 12/20
24/24 [==============================] - 33s 1s/step - loss: 0.3814 - accuracy: 0.8821 - val_loss: 0.6878 - val_accuracy: 0.8438

Epoch 00012: val_accuracy did not improve from 0.87500
Epoch 13/20
24/24 [==============================] - 33s 1s/step - loss: 0.3332 - accuracy: 0.8861 - val_loss: 0.7092 - val_accuracy: 0.8594

Epoch 00013: val_accuracy did not improve from 0.87500
Epoch 14/20
24/24 [==============================] - 34s 1s/step - loss: 0.2858 - accuracy: 0.9012 - val_loss: 0.7722 - val_accuracy: 0.8594

Epoch 00014: val_accuracy did not improve from 0.87500
Epoch 15/20
24/24 [==============================] - 34s 1s/step - loss: 0.2809 - accuracy: 0.9019 - val_loss: 0.8357 - val_accuracy: 0.8438

Epoch 00015: val_accuracy did not improve from 0.87500
Epoch 16/20
24/24 [==============================] - 34s 1s/step - loss: 0.2373 - accuracy: 0.9184 - val_loss: 0.6839 - val_accuracy: 0.8438

Epoch 00016: val_accuracy did not improve from 0.87500
Epoch 17/20
24/24 [==============================] - 34s 1s/step - loss: 0.2237 - accuracy: 0.9203 - val_loss: 0.5257 - val_accuracy: 0.8750

Epoch 00017: val_accuracy did not improve from 0.87500
Epoch 18/20
24/24 [==============================] - 32s 1s/step - loss: 0.1795 - accuracy: 0.9405 - val_loss: 0.7604 - val_accuracy: 0.8750

Epoch 00018: val_accuracy did not improve from 0.87500
Epoch 19/20
24/24 [==============================] - 32s 1s/step - loss: 0.1708 - accuracy: 0.9368 - val_loss: 0.6803 - val_accuracy: 0.8750

Epoch 00019: val_accuracy did not improve from 0.87500
Epoch 20/20
24/24 [==============================] - 34s 1s/step - loss: 0.2089 - accuracy: 0.9244 - val_loss: 0.5418 - val_accuracy: 0.8594

Epoch 00020: val_accuracy did not improve from 0.87500
Test accuracy:  0.887499988079071
Epoch 1/20
24/24 [==============================] - 35s 1s/step - loss: 2.2030 - accuracy: 0.1560 - val_loss: 1.6415 - val_accuracy: 0.4219

Epoch 00001: val_accuracy improved from -inf to 0.42188, saving model to models\best_model_esc10_exp_2_10
Epoch 2/20
24/24 [==============================] - 35s 1s/step - loss: 1.8089 - accuracy: 0.3295 - val_loss: 1.1268 - val_accuracy: 0.5312

Epoch 00002: val_accuracy improved from 0.42188 to 0.53125, saving model to models\best_model_esc10_exp_2_10
Epoch 3/20
24/24 [==============================] - 34s 1s/step - loss: 1.3940 - accuracy: 0.4857 - val_loss: 0.8830 - val_accuracy: 0.6719

Epoch 00003: val_accuracy improved from 0.53125 to 0.67188, saving model to models\best_model_esc10_exp_2_10
Epoch 4/20
24/24 [==============================] - 33s 1s/step - loss: 1.1194 - accuracy: 0.6021 - val_loss: 0.5957 - val_accuracy: 0.7500

Epoch 00004: val_accuracy improved from 0.67188 to 0.75000, saving model to models\best_model_esc10_exp_2_10
Epoch 5/20
24/24 [==============================] - 34s 1s/step - loss: 0.8794 - accuracy: 0.6961 - val_loss: 0.6419 - val_accuracy: 0.7500

Epoch 00005: val_accuracy did not improve from 0.75000
Epoch 6/20
24/24 [==============================] - 34s 1s/step - loss: 0.7677 - accuracy: 0.7359 - val_loss: 0.9471 - val_accuracy: 0.7500

Epoch 00006: val_accuracy did not improve from 0.75000
Epoch 7/20
24/24 [==============================] - 32s 1s/step - loss: 0.6819 - accuracy: 0.7439 - val_loss: 0.5676 - val_accuracy: 0.8125

Epoch 00007: val_accuracy improved from 0.75000 to 0.81250, saving model to models\best_model_esc10_exp_2_10
Epoch 8/20
24/24 [==============================] - 32s 1s/step - loss: 0.5922 - accuracy: 0.7962 - val_loss: 0.5509 - val_accuracy: 0.8125

Epoch 00008: val_accuracy did not improve from 0.81250
Epoch 9/20
24/24 [==============================] - 34s 1s/step - loss: 0.5604 - accuracy: 0.8131 - val_loss: 0.7469 - val_accuracy: 0.7969

Epoch 00009: val_accuracy did not improve from 0.81250
Epoch 10/20
24/24 [==============================] - 34s 1s/step - loss: 0.5091 - accuracy: 0.8221 - val_loss: 0.6165 - val_accuracy: 0.7969

Epoch 00010: val_accuracy did not improve from 0.81250
Epoch 11/20
24/24 [==============================] - 33s 1s/step - loss: 0.4495 - accuracy: 0.8470 - val_loss: 0.6182 - val_accuracy: 0.7656

Epoch 00011: val_accuracy did not improve from 0.81250
Epoch 12/20
24/24 [==============================] - 35s 1s/step - loss: 0.3708 - accuracy: 0.8727 - val_loss: 0.6630 - val_accuracy: 0.7812

Epoch 00012: val_accuracy did not improve from 0.81250
Epoch 13/20
24/24 [==============================] - 34s 1s/step - loss: 0.3666 - accuracy: 0.8684 - val_loss: 0.5464 - val_accuracy: 0.8125

Epoch 00013: val_accuracy did not improve from 0.81250
Epoch 14/20
24/24 [==============================] - 33s 1s/step - loss: 0.3254 - accuracy: 0.8806 - val_loss: 0.5866 - val_accuracy: 0.8438

Epoch 00014: val_accuracy improved from 0.81250 to 0.84375, saving model to models\best_model_esc10_exp_2_10
Epoch 15/20
24/24 [==============================] - 32s 1s/step - loss: 0.3196 - accuracy: 0.8786 - val_loss: 0.7349 - val_accuracy: 0.7812

Epoch 00015: val_accuracy did not improve from 0.84375
Epoch 16/20
24/24 [==============================] - 34s 1s/step - loss: 0.3146 - accuracy: 0.8912 - val_loss: 0.6807 - val_accuracy: 0.7969

Epoch 00016: val_accuracy did not improve from 0.84375
Epoch 17/20
24/24 [==============================] - 34s 1s/step - loss: 0.3325 - accuracy: 0.8822 - val_loss: 0.6613 - val_accuracy: 0.8281

Epoch 00017: val_accuracy did not improve from 0.84375
Epoch 18/20
24/24 [==============================] - 34s 1s/step - loss: 0.2573 - accuracy: 0.9094 - val_loss: 0.6153 - val_accuracy: 0.8594

Epoch 00018: val_accuracy improved from 0.84375 to 0.85938, saving model to models\best_model_esc10_exp_2_10
Epoch 19/20
24/24 [==============================] - 33s 1s/step - loss: 0.2540 - accuracy: 0.9141 - val_loss: 0.8291 - val_accuracy: 0.8438

Epoch 00019: val_accuracy did not improve from 0.85938
Epoch 20/20
24/24 [==============================] - 33s 1s/step - loss: 0.2373 - accuracy: 0.9046 - val_loss: 0.8514 - val_accuracy: 0.8281

Epoch 00020: val_accuracy did not improve from 0.85938
Test accuracy:  0.862500011920929
Epoch 1/20
24/24 [==============================] - 37s 2s/step - loss: 2.1899 - accuracy: 0.1685 - val_loss: 1.6466 - val_accuracy: 0.3750

Epoch 00001: val_accuracy improved from -inf to 0.37500, saving model to models\best_model_esc10_exp_2_11
Epoch 2/20
24/24 [==============================] - 40s 2s/step - loss: 1.7849 - accuracy: 0.3382 - val_loss: 0.9321 - val_accuracy: 0.7188

Epoch 00002: val_accuracy improved from 0.37500 to 0.71875, saving model to models\best_model_esc10_exp_2_11
Epoch 3/20
24/24 [==============================] - 43s 2s/step - loss: 1.3281 - accuracy: 0.4855 - val_loss: 0.8137 - val_accuracy: 0.7188

Epoch 00003: val_accuracy did not improve from 0.71875
Epoch 4/20
24/24 [==============================] - 44s 2s/step - loss: 1.0493 - accuracy: 0.6157 - val_loss: 0.7002 - val_accuracy: 0.7031

Epoch 00004: val_accuracy did not improve from 0.71875
Epoch 5/20
24/24 [==============================] - 41s 2s/step - loss: 0.8806 - accuracy: 0.6825 - val_loss: 0.5359 - val_accuracy: 0.7656

Epoch 00005: val_accuracy improved from 0.71875 to 0.76562, saving model to models\best_model_esc10_exp_2_11
Epoch 6/20
24/24 [==============================] - 44s 2s/step - loss: 0.7382 - accuracy: 0.7449 - val_loss: 0.6184 - val_accuracy: 0.7656

Epoch 00006: val_accuracy did not improve from 0.76562
Epoch 7/20
24/24 [==============================] - 41s 2s/step - loss: 0.6617 - accuracy: 0.7512 - val_loss: 0.5068 - val_accuracy: 0.8438

Epoch 00007: val_accuracy improved from 0.76562 to 0.84375, saving model to models\best_model_esc10_exp_2_11
Epoch 8/20
24/24 [==============================] - 43s 2s/step - loss: 0.5857 - accuracy: 0.7922 - val_loss: 0.4987 - val_accuracy: 0.8125

Epoch 00008: val_accuracy did not improve from 0.84375
Epoch 9/20
24/24 [==============================] - 43s 2s/step - loss: 0.5277 - accuracy: 0.8253 - val_loss: 0.5370 - val_accuracy: 0.7812

Epoch 00009: val_accuracy did not improve from 0.84375
Epoch 10/20
24/24 [==============================] - 40s 2s/step - loss: 0.4266 - accuracy: 0.8509 - val_loss: 0.4465 - val_accuracy: 0.8594

Epoch 00010: val_accuracy improved from 0.84375 to 0.85938, saving model to models\best_model_esc10_exp_2_11
Epoch 11/20
24/24 [==============================] - 42s 2s/step - loss: 0.4125 - accuracy: 0.8519 - val_loss: 0.5033 - val_accuracy: 0.8438

Epoch 00011: val_accuracy did not improve from 0.85938
Epoch 12/20
24/24 [==============================] - 44s 2s/step - loss: 0.3801 - accuracy: 0.8602 - val_loss: 0.5632 - val_accuracy: 0.8594

Epoch 00012: val_accuracy did not improve from 0.85938
Epoch 13/20
24/24 [==============================] - 43s 2s/step - loss: 0.3683 - accuracy: 0.8826 - val_loss: 0.5270 - val_accuracy: 0.8125

Epoch 00013: val_accuracy did not improve from 0.85938
Epoch 14/20
24/24 [==============================] - 41s 2s/step - loss: 0.3450 - accuracy: 0.8880 - val_loss: 0.4879 - val_accuracy: 0.8594

Epoch 00014: val_accuracy did not improve from 0.85938
Epoch 15/20
24/24 [==============================] - 44s 2s/step - loss: 0.2947 - accuracy: 0.8872 - val_loss: 0.5092 - val_accuracy: 0.8281

Epoch 00015: val_accuracy did not improve from 0.85938
Epoch 16/20
24/24 [==============================] - 41s 2s/step - loss: 0.2877 - accuracy: 0.9031 - val_loss: 0.5290 - val_accuracy: 0.8125

Epoch 00016: val_accuracy did not improve from 0.85938
Epoch 17/20
24/24 [==============================] - 44s 2s/step - loss: 0.3022 - accuracy: 0.8989 - val_loss: 0.6553 - val_accuracy: 0.8281

Epoch 00017: val_accuracy did not improve from 0.85938
Epoch 18/20
24/24 [==============================] - 42s 2s/step - loss: 0.2609 - accuracy: 0.9143 - val_loss: 0.6865 - val_accuracy: 0.8438

Epoch 00018: val_accuracy did not improve from 0.85938
Epoch 19/20
24/24 [==============================] - 41s 2s/step - loss: 0.2214 - accuracy: 0.9246 - val_loss: 0.4964 - val_accuracy: 0.8594

Epoch 00019: val_accuracy did not improve from 0.85938
Epoch 20/20
24/24 [==============================] - 44s 2s/step - loss: 0.1826 - accuracy: 0.9382 - val_loss: 0.6461 - val_accuracy: 0.8438

Epoch 00020: val_accuracy did not improve from 0.85938
Test accuracy:  0.8125
Epoch 1/20
24/24 [==============================] - 44s 2s/step - loss: 2.2379 - accuracy: 0.1538 - val_loss: 1.6862 - val_accuracy: 0.3906

Epoch 00001: val_accuracy improved from -inf to 0.39062, saving model to models\best_model_esc10_exp_2_12
Epoch 2/20
24/24 [==============================] - 46s 2s/step - loss: 1.8914 - accuracy: 0.2716 - val_loss: 1.3562 - val_accuracy: 0.5312

Epoch 00002: val_accuracy improved from 0.39062 to 0.53125, saving model to models\best_model_esc10_exp_2_12
Epoch 3/20
24/24 [==============================] - 44s 2s/step - loss: 1.5593 - accuracy: 0.4208 - val_loss: 1.0223 - val_accuracy: 0.6719

Epoch 00003: val_accuracy improved from 0.53125 to 0.67188, saving model to models\best_model_esc10_exp_2_12
Epoch 4/20
24/24 [==============================] - 43s 2s/step - loss: 1.2542 - accuracy: 0.5241 - val_loss: 0.7800 - val_accuracy: 0.7656

Epoch 00004: val_accuracy improved from 0.67188 to 0.76562, saving model to models\best_model_esc10_exp_2_12
Epoch 5/20
24/24 [==============================] - 47s 2s/step - loss: 1.0784 - accuracy: 0.6047 - val_loss: 0.6450 - val_accuracy: 0.7656

Epoch 00005: val_accuracy did not improve from 0.76562
Epoch 6/20
24/24 [==============================] - 46s 2s/step - loss: 0.9391 - accuracy: 0.6616 - val_loss: 0.5737 - val_accuracy: 0.7969

Epoch 00006: val_accuracy improved from 0.76562 to 0.79688, saving model to models\best_model_esc10_exp_2_12
Epoch 7/20
24/24 [==============================] - 43s 2s/step - loss: 0.7690 - accuracy: 0.7146 - val_loss: 0.4658 - val_accuracy: 0.7656

Epoch 00007: val_accuracy did not improve from 0.79688
Epoch 8/20
24/24 [==============================] - 46s 2s/step - loss: 0.7384 - accuracy: 0.7530 - val_loss: 0.5020 - val_accuracy: 0.8281

Epoch 00008: val_accuracy improved from 0.79688 to 0.82812, saving model to models\best_model_esc10_exp_2_12
Epoch 9/20
24/24 [==============================] - 43s 2s/step - loss: 0.6585 - accuracy: 0.7578 - val_loss: 0.5432 - val_accuracy: 0.7500

Epoch 00009: val_accuracy did not improve from 0.82812
Epoch 10/20
24/24 [==============================] - 47s 2s/step - loss: 0.5499 - accuracy: 0.7904 - val_loss: 0.4592 - val_accuracy: 0.8594

Epoch 00010: val_accuracy improved from 0.82812 to 0.85938, saving model to models\best_model_esc10_exp_2_12
Epoch 11/20
24/24 [==============================] - 45s 2s/step - loss: 0.4545 - accuracy: 0.8439 - val_loss: 0.4085 - val_accuracy: 0.9219

Epoch 00011: val_accuracy improved from 0.85938 to 0.92188, saving model to models\best_model_esc10_exp_2_12
Epoch 12/20
24/24 [==============================] - 44s 2s/step - loss: 0.4242 - accuracy: 0.8384 - val_loss: 0.4971 - val_accuracy: 0.8906

Epoch 00012: val_accuracy did not improve from 0.92188
Epoch 13/20
24/24 [==============================] - 46s 2s/step - loss: 0.3547 - accuracy: 0.8739 - val_loss: 0.5222 - val_accuracy: 0.8750

Epoch 00013: val_accuracy did not improve from 0.92188
Epoch 14/20
24/24 [==============================] - 42s 2s/step - loss: 0.3830 - accuracy: 0.8649 - val_loss: 0.4545 - val_accuracy: 0.8594

Epoch 00014: val_accuracy did not improve from 0.92188
Epoch 15/20
24/24 [==============================] - 44s 2s/step - loss: 0.3254 - accuracy: 0.8894 - val_loss: 0.4914 - val_accuracy: 0.8906

Epoch 00015: val_accuracy did not improve from 0.92188
Epoch 16/20
24/24 [==============================] - 46s 2s/step - loss: 0.2907 - accuracy: 0.9004 - val_loss: 0.6117 - val_accuracy: 0.8125

Epoch 00016: val_accuracy did not improve from 0.92188
Epoch 17/20
24/24 [==============================] - 46s 2s/step - loss: 0.2703 - accuracy: 0.9021 - val_loss: 0.5071 - val_accuracy: 0.9062

Epoch 00017: val_accuracy did not improve from 0.92188
Epoch 18/20
24/24 [==============================] - 44s 2s/step - loss: 0.2000 - accuracy: 0.9241 - val_loss: 0.6029 - val_accuracy: 0.8438

Epoch 00018: val_accuracy did not improve from 0.92188
Epoch 19/20
24/24 [==============================] - 43s 2s/step - loss: 0.2722 - accuracy: 0.9082 - val_loss: 0.4757 - val_accuracy: 0.9062

Epoch 00019: val_accuracy did not improve from 0.92188
Epoch 20/20
24/24 [==============================] - 47s 2s/step - loss: 0.2084 - accuracy: 0.9214 - val_loss: 0.5788 - val_accuracy: 0.8438

Epoch 00020: val_accuracy did not improve from 0.92188
Test accuracy:  0.8500000238418579
Epoch 1/20
24/24 [==============================] - 49s 2s/step - loss: 2.2074 - accuracy: 0.1857 - val_loss: 1.5305 - val_accuracy: 0.4219

Epoch 00001: val_accuracy improved from -inf to 0.42188, saving model to models\best_model_esc10_exp_2_13
Epoch 2/20
24/24 [==============================] - 43s 2s/step - loss: 1.7184 - accuracy: 0.3455 - val_loss: 1.0500 - val_accuracy: 0.6719

Epoch 00002: val_accuracy improved from 0.42188 to 0.67188, saving model to models\best_model_esc10_exp_2_13
Epoch 3/20
24/24 [==============================] - 47s 2s/step - loss: 1.2736 - accuracy: 0.5307 - val_loss: 0.7921 - val_accuracy: 0.7500

Epoch 00003: val_accuracy improved from 0.67188 to 0.75000, saving model to models\best_model_esc10_exp_2_13
Epoch 4/20
24/24 [==============================] - 47s 2s/step - loss: 1.0314 - accuracy: 0.6457 - val_loss: 0.6527 - val_accuracy: 0.6875

Epoch 00004: val_accuracy did not improve from 0.75000
Epoch 5/20
24/24 [==============================] - 44s 2s/step - loss: 0.8488 - accuracy: 0.6976 - val_loss: 0.5906 - val_accuracy: 0.7344

Epoch 00005: val_accuracy did not improve from 0.75000
Epoch 6/20
24/24 [==============================] - 47s 2s/step - loss: 0.7864 - accuracy: 0.7139 - val_loss: 0.5961 - val_accuracy: 0.8125

Epoch 00006: val_accuracy improved from 0.75000 to 0.81250, saving model to models\best_model_esc10_exp_2_13
Epoch 7/20
24/24 [==============================] - 43s 2s/step - loss: 0.6097 - accuracy: 0.7674 - val_loss: 0.5388 - val_accuracy: 0.8438

Epoch 00007: val_accuracy improved from 0.81250 to 0.84375, saving model to models\best_model_esc10_exp_2_13
Epoch 8/20
24/24 [==============================] - 45s 2s/step - loss: 0.5181 - accuracy: 0.8221 - val_loss: 0.6510 - val_accuracy: 0.7969

Epoch 00008: val_accuracy did not improve from 0.84375
Epoch 9/20
24/24 [==============================] - 47s 2s/step - loss: 0.4915 - accuracy: 0.8322 - val_loss: 0.5725 - val_accuracy: 0.8125

Epoch 00009: val_accuracy did not improve from 0.84375
Epoch 10/20
24/24 [==============================] - 46s 2s/step - loss: 0.4510 - accuracy: 0.8369 - val_loss: 0.4648 - val_accuracy: 0.8125

Epoch 00010: val_accuracy did not improve from 0.84375
Epoch 11/20
24/24 [==============================] - 42s 2s/step - loss: 0.4716 - accuracy: 0.8307 - val_loss: 0.4601 - val_accuracy: 0.8438

Epoch 00011: val_accuracy did not improve from 0.84375
Epoch 12/20
24/24 [==============================] - 47s 2s/step - loss: 0.3743 - accuracy: 0.8554 - val_loss: 0.4852 - val_accuracy: 0.8594

Epoch 00012: val_accuracy improved from 0.84375 to 0.85938, saving model to models\best_model_esc10_exp_2_13
Epoch 13/20
24/24 [==============================] - 46s 2s/step - loss: 0.3402 - accuracy: 0.8805 - val_loss: 0.4645 - val_accuracy: 0.8438

Epoch 00013: val_accuracy did not improve from 0.85938
Epoch 14/20
24/24 [==============================] - 42s 2s/step - loss: 0.3026 - accuracy: 0.8953 - val_loss: 0.4923 - val_accuracy: 0.8750

Epoch 00014: val_accuracy improved from 0.85938 to 0.87500, saving model to models\best_model_esc10_exp_2_13
Epoch 15/20
24/24 [==============================] - 46s 2s/step - loss: 0.2533 - accuracy: 0.9134 - val_loss: 0.7101 - val_accuracy: 0.8438

Epoch 00015: val_accuracy did not improve from 0.87500
Epoch 16/20
24/24 [==============================] - 48s 2s/step - loss: 0.2560 - accuracy: 0.9216 - val_loss: 0.3927 - val_accuracy: 0.9062

Epoch 00016: val_accuracy improved from 0.87500 to 0.90625, saving model to models\best_model_esc10_exp_2_13
Epoch 17/20
24/24 [==============================] - 44s 2s/step - loss: 0.2319 - accuracy: 0.9218 - val_loss: 0.3303 - val_accuracy: 0.8906

Epoch 00017: val_accuracy did not improve from 0.90625
Epoch 18/20
24/24 [==============================] - 43s 2s/step - loss: 0.2045 - accuracy: 0.9276 - val_loss: 0.8744 - val_accuracy: 0.8281

Epoch 00018: val_accuracy did not improve from 0.90625
Epoch 19/20
24/24 [==============================] - 47s 2s/step - loss: 0.3212 - accuracy: 0.8997 - val_loss: 0.5456 - val_accuracy: 0.8906

Epoch 00019: val_accuracy did not improve from 0.90625
Epoch 20/20
24/24 [==============================] - 46s 2s/step - loss: 0.1716 - accuracy: 0.9412 - val_loss: 0.5160 - val_accuracy: 0.8594

Epoch 00020: val_accuracy did not improve from 0.90625
Test accuracy:  0.875
Epoch 1/20
24/24 [==============================] - 42s 2s/step - loss: 2.2611 - accuracy: 0.1577 - val_loss: 1.6038 - val_accuracy: 0.3906

Epoch 00001: val_accuracy improved from -inf to 0.39062, saving model to models\best_model_esc10_exp_2_14
Epoch 2/20
24/24 [==============================] - 47s 2s/step - loss: 1.7383 - accuracy: 0.3571 - val_loss: 1.0617 - val_accuracy: 0.5938

Epoch 00002: val_accuracy improved from 0.39062 to 0.59375, saving model to models\best_model_esc10_exp_2_14
Epoch 3/20
24/24 [==============================] - 48s 2s/step - loss: 1.2488 - accuracy: 0.5363 - val_loss: 0.8036 - val_accuracy: 0.7344

Epoch 00003: val_accuracy improved from 0.59375 to 0.73438, saving model to models\best_model_esc10_exp_2_14
Epoch 4/20
24/24 [==============================] - 43s 2s/step - loss: 1.0275 - accuracy: 0.6241 - val_loss: 0.5948 - val_accuracy: 0.8125

Epoch 00004: val_accuracy improved from 0.73438 to 0.81250, saving model to models\best_model_esc10_exp_2_14
Epoch 5/20
24/24 [==============================] - 46s 2s/step - loss: 0.8228 - accuracy: 0.7018 - val_loss: 0.5600 - val_accuracy: 0.8125

Epoch 00005: val_accuracy did not improve from 0.81250
Epoch 6/20
24/24 [==============================] - 48s 2s/step - loss: 0.7294 - accuracy: 0.7227 - val_loss: 0.5023 - val_accuracy: 0.8281

Epoch 00006: val_accuracy improved from 0.81250 to 0.82812, saving model to models\best_model_esc10_exp_2_14
Epoch 7/20
24/24 [==============================] - 46s 2s/step - loss: 0.6386 - accuracy: 0.7612 - val_loss: 0.5125 - val_accuracy: 0.8750

Epoch 00007: val_accuracy improved from 0.82812 to 0.87500, saving model to models\best_model_esc10_exp_2_14
Epoch 8/20
24/24 [==============================] - 43s 2s/step - loss: 0.5170 - accuracy: 0.8136 - val_loss: 0.4974 - val_accuracy: 0.8594

Epoch 00008: val_accuracy did not improve from 0.87500
Epoch 9/20
24/24 [==============================] - 48s 2s/step - loss: 0.4345 - accuracy: 0.8513 - val_loss: 0.5770 - val_accuracy: 0.8125

Epoch 00009: val_accuracy did not improve from 0.87500
Epoch 10/20
24/24 [==============================] - 47s 2s/step - loss: 0.4542 - accuracy: 0.8395 - val_loss: 0.5925 - val_accuracy: 0.8438

Epoch 00010: val_accuracy did not improve from 0.87500
Epoch 11/20
24/24 [==============================] - 42s 2s/step - loss: 0.3569 - accuracy: 0.8753 - val_loss: 0.4445 - val_accuracy: 0.8594

Epoch 00011: val_accuracy did not improve from 0.87500
Epoch 12/20
24/24 [==============================] - 48s 2s/step - loss: 0.3616 - accuracy: 0.8695 - val_loss: 0.7866 - val_accuracy: 0.8438

Epoch 00012: val_accuracy did not improve from 0.87500
Epoch 13/20
24/24 [==============================] - 48s 2s/step - loss: 0.3505 - accuracy: 0.8862 - val_loss: 0.5203 - val_accuracy: 0.8594

Epoch 00013: val_accuracy did not improve from 0.87500
Epoch 14/20
24/24 [==============================] - 44s 2s/step - loss: 0.2578 - accuracy: 0.8984 - val_loss: 0.8590 - val_accuracy: 0.7969

Epoch 00014: val_accuracy did not improve from 0.87500
Epoch 15/20
24/24 [==============================] - 45s 2s/step - loss: 0.4079 - accuracy: 0.8630 - val_loss: 0.7348 - val_accuracy: 0.8281

Epoch 00015: val_accuracy did not improve from 0.87500
Epoch 16/20
24/24 [==============================] - 48s 2s/step - loss: 0.2832 - accuracy: 0.8909 - val_loss: 0.6931 - val_accuracy: 0.8438

Epoch 00016: val_accuracy did not improve from 0.87500
Epoch 17/20
24/24 [==============================] - 45s 2s/step - loss: 0.2634 - accuracy: 0.9145 - val_loss: 0.7352 - val_accuracy: 0.8438

Epoch 00017: val_accuracy did not improve from 0.87500
Epoch 18/20
24/24 [==============================] - 44s 2s/step - loss: 0.2049 - accuracy: 0.9304 - val_loss: 0.5059 - val_accuracy: 0.8906

Epoch 00018: val_accuracy improved from 0.87500 to 0.89062, saving model to models\best_model_esc10_exp_2_14
Epoch 19/20
24/24 [==============================] - 48s 2s/step - loss: 0.1903 - accuracy: 0.9337 - val_loss: 0.5940 - val_accuracy: 0.8594

Epoch 00019: val_accuracy did not improve from 0.89062
Epoch 20/20
24/24 [==============================] - 47s 2s/step - loss: 0.2198 - accuracy: 0.9287 - val_loss: 0.6654 - val_accuracy: 0.8750

Epoch 00020: val_accuracy did not improve from 0.89062
Test accuracy:  0.862500011920929
Epoch 1/20
24/24 [==============================] - 46s 2s/step - loss: 2.2236 - accuracy: 0.1845 - val_loss: 1.6031 - val_accuracy: 0.3906

Epoch 00001: val_accuracy improved from -inf to 0.39062, saving model to models\best_model_esc10_exp_2_15
Epoch 2/20
24/24 [==============================] - 50s 2s/step - loss: 1.7679 - accuracy: 0.3179 - val_loss: 0.9547 - val_accuracy: 0.6875

Epoch 00002: val_accuracy improved from 0.39062 to 0.68750, saving model to models\best_model_esc10_exp_2_15
Epoch 3/20
24/24 [==============================] - 45s 2s/step - loss: 1.1865 - accuracy: 0.5767 - val_loss: 0.7104 - val_accuracy: 0.8438

Epoch 00003: val_accuracy improved from 0.68750 to 0.84375, saving model to models\best_model_esc10_exp_2_15
Epoch 4/20
24/24 [==============================] - 47s 2s/step - loss: 0.9814 - accuracy: 0.6381 - val_loss: 0.5241 - val_accuracy: 0.7812

Epoch 00004: val_accuracy did not improve from 0.84375
Epoch 5/20
24/24 [==============================] - 50s 2s/step - loss: 0.8722 - accuracy: 0.7033 - val_loss: 0.5893 - val_accuracy: 0.8125

Epoch 00005: val_accuracy did not improve from 0.84375
Epoch 6/20
24/24 [==============================] - 46s 2s/step - loss: 0.7697 - accuracy: 0.7083 - val_loss: 0.5286 - val_accuracy: 0.7969

Epoch 00006: val_accuracy did not improve from 0.84375
Epoch 7/20
24/24 [==============================] - 47s 2s/step - loss: 0.6264 - accuracy: 0.7674 - val_loss: 0.6245 - val_accuracy: 0.7969

Epoch 00007: val_accuracy did not improve from 0.84375
Epoch 8/20
24/24 [==============================] - 50s 2s/step - loss: 0.5960 - accuracy: 0.7796 - val_loss: 0.5894 - val_accuracy: 0.7656

Epoch 00008: val_accuracy did not improve from 0.84375
Epoch 9/20
24/24 [==============================] - 48s 2s/step - loss: 0.5418 - accuracy: 0.7930 - val_loss: 0.4836 - val_accuracy: 0.7969

Epoch 00009: val_accuracy did not improve from 0.84375
Epoch 10/20
24/24 [==============================] - 45s 2s/step - loss: 0.5487 - accuracy: 0.8177 - val_loss: 0.5913 - val_accuracy: 0.8281

Epoch 00010: val_accuracy did not improve from 0.84375
Epoch 11/20
24/24 [==============================] - 49s 2s/step - loss: 0.4363 - accuracy: 0.8496 - val_loss: 0.5890 - val_accuracy: 0.7969

Epoch 00011: val_accuracy did not improve from 0.84375
Epoch 12/20
24/24 [==============================] - 48s 2s/step - loss: 0.4285 - accuracy: 0.8561 - val_loss: 0.6310 - val_accuracy: 0.8281

Epoch 00012: val_accuracy did not improve from 0.84375
Epoch 13/20
24/24 [==============================] - 44s 2s/step - loss: 0.3796 - accuracy: 0.8786 - val_loss: 0.7758 - val_accuracy: 0.8281

Epoch 00013: val_accuracy did not improve from 0.84375
Epoch 14/20
24/24 [==============================] - 50s 2s/step - loss: 0.3241 - accuracy: 0.8799 - val_loss: 0.5352 - val_accuracy: 0.8281

Epoch 00014: val_accuracy did not improve from 0.84375
Epoch 15/20
24/24 [==============================] - 48s 2s/step - loss: 0.3727 - accuracy: 0.8670 - val_loss: 0.5082 - val_accuracy: 0.7812

Epoch 00015: val_accuracy did not improve from 0.84375
Epoch 16/20
24/24 [==============================] - 44s 2s/step - loss: 0.2945 - accuracy: 0.8950 - val_loss: 0.6408 - val_accuracy: 0.8438

Epoch 00016: val_accuracy did not improve from 0.84375
Epoch 17/20
24/24 [==============================] - 50s 2s/step - loss: 0.2325 - accuracy: 0.9194 - val_loss: 0.6009 - val_accuracy: 0.8438

Epoch 00017: val_accuracy did not improve from 0.84375
Epoch 18/20
24/24 [==============================] - 48s 2s/step - loss: 0.2505 - accuracy: 0.9083 - val_loss: 0.6594 - val_accuracy: 0.7812

Epoch 00018: val_accuracy did not improve from 0.84375
Epoch 19/20
24/24 [==============================] - 44s 2s/step - loss: 0.2168 - accuracy: 0.9270 - val_loss: 0.9029 - val_accuracy: 0.7969

Epoch 00019: val_accuracy did not improve from 0.84375
Epoch 20/20
24/24 [==============================] - 50s 2s/step - loss: 0.2855 - accuracy: 0.8913 - val_loss: 0.6991 - val_accuracy: 0.8438

Epoch 00020: val_accuracy did not improve from 0.84375
Test accuracy:  0.762499988079071
Epoch 1/20
24/24 [==============================] - 51s 2s/step - loss: 2.1784 - accuracy: 0.1839 - val_loss: 1.7879 - val_accuracy: 0.2812

Epoch 00001: val_accuracy improved from -inf to 0.28125, saving model to models\best_model_esc10_exp_2_16
Epoch 2/20
24/24 [==============================] - 39s 2s/step - loss: 1.8483 - accuracy: 0.2777 - val_loss: 1.4173 - val_accuracy: 0.4062

Epoch 00002: val_accuracy improved from 0.28125 to 0.40625, saving model to models\best_model_esc10_exp_2_16
Epoch 3/20
24/24 [==============================] - 44s 2s/step - loss: 1.5310 - accuracy: 0.4211 - val_loss: 0.8377 - val_accuracy: 0.6406

Epoch 00003: val_accuracy improved from 0.40625 to 0.64062, saving model to models\best_model_esc10_exp_2_16
Epoch 4/20
24/24 [==============================] - 44s 2s/step - loss: 1.1797 - accuracy: 0.5722 - val_loss: 0.6429 - val_accuracy: 0.7500

Epoch 00004: val_accuracy improved from 0.64062 to 0.75000, saving model to models\best_model_esc10_exp_2_16
Epoch 5/20
24/24 [==============================] - 42s 2s/step - loss: 0.9982 - accuracy: 0.6392 - val_loss: 0.5605 - val_accuracy: 0.7812

Epoch 00005: val_accuracy improved from 0.75000 to 0.78125, saving model to models\best_model_esc10_exp_2_16
Epoch 6/20
24/24 [==============================] - 40s 2s/step - loss: 0.7835 - accuracy: 0.7191 - val_loss: 0.5880 - val_accuracy: 0.7969

Epoch 00006: val_accuracy improved from 0.78125 to 0.79688, saving model to models\best_model_esc10_exp_2_16
Epoch 7/20
24/24 [==============================] - 44s 2s/step - loss: 0.7139 - accuracy: 0.7358 - val_loss: 0.5556 - val_accuracy: 0.8281

Epoch 00007: val_accuracy improved from 0.79688 to 0.82812, saving model to models\best_model_esc10_exp_2_16
Epoch 8/20
24/24 [==============================] - 44s 2s/step - loss: 0.5614 - accuracy: 0.8052 - val_loss: 0.6249 - val_accuracy: 0.8125

Epoch 00008: val_accuracy did not improve from 0.82812
Epoch 9/20
24/24 [==============================] - 42s 2s/step - loss: 0.5943 - accuracy: 0.7915 - val_loss: 0.6013 - val_accuracy: 0.8281

Epoch 00009: val_accuracy did not improve from 0.82812
Epoch 10/20
24/24 [==============================] - 42s 2s/step - loss: 0.4708 - accuracy: 0.8150 - val_loss: 0.4824 - val_accuracy: 0.8438

Epoch 00010: val_accuracy improved from 0.82812 to 0.84375, saving model to models\best_model_esc10_exp_2_16
Epoch 11/20
24/24 [==============================] - 44s 2s/step - loss: 0.4015 - accuracy: 0.8526 - val_loss: 0.5333 - val_accuracy: 0.8438

Epoch 00011: val_accuracy did not improve from 0.84375
Epoch 12/20
24/24 [==============================] - 40s 2s/step - loss: 0.4550 - accuracy: 0.8292 - val_loss: 0.6025 - val_accuracy: 0.8281

Epoch 00012: val_accuracy did not improve from 0.84375
Epoch 13/20
24/24 [==============================] - 41s 2s/step - loss: 0.4059 - accuracy: 0.8524 - val_loss: 0.4908 - val_accuracy: 0.8438

Epoch 00013: val_accuracy did not improve from 0.84375
Epoch 14/20
24/24 [==============================] - 45s 2s/step - loss: 0.3057 - accuracy: 0.8883 - val_loss: 0.5868 - val_accuracy: 0.8438

Epoch 00014: val_accuracy did not improve from 0.84375
Epoch 15/20
24/24 [==============================] - 42s 2s/step - loss: 0.3182 - accuracy: 0.8882 - val_loss: 0.6127 - val_accuracy: 0.8438

Epoch 00015: val_accuracy did not improve from 0.84375
Epoch 16/20
24/24 [==============================] - 39s 2s/step - loss: 0.2884 - accuracy: 0.8967 - val_loss: 0.4601 - val_accuracy: 0.9062

Epoch 00016: val_accuracy improved from 0.84375 to 0.90625, saving model to models\best_model_esc10_exp_2_16
Epoch 17/20
24/24 [==============================] - 44s 2s/step - loss: 0.2719 - accuracy: 0.9011 - val_loss: 0.7409 - val_accuracy: 0.8438

Epoch 00017: val_accuracy did not improve from 0.90625
Epoch 18/20
24/24 [==============================] - 44s 2s/step - loss: 0.2508 - accuracy: 0.9124 - val_loss: 0.6617 - val_accuracy: 0.8594

Epoch 00018: val_accuracy did not improve from 0.90625
Epoch 19/20
24/24 [==============================] - 42s 2s/step - loss: 0.2350 - accuracy: 0.9181 - val_loss: 0.4322 - val_accuracy: 0.8750

Epoch 00019: val_accuracy did not improve from 0.90625
Epoch 20/20
24/24 [==============================] - 43s 2s/step - loss: 0.1783 - accuracy: 0.9407 - val_loss: 0.6982 - val_accuracy: 0.8750

Epoch 00020: val_accuracy did not improve from 0.90625
Test accuracy:  0.800000011920929
Epoch 1/20
24/24 [==============================] - 44s 2s/step - loss: 2.2209 - accuracy: 0.1609 - val_loss: 1.6956 - val_accuracy: 0.2812

Epoch 00001: val_accuracy improved from -inf to 0.28125, saving model to models\best_model_esc10_exp_2_17
Epoch 2/20
24/24 [==============================] - 41s 2s/step - loss: 1.7382 - accuracy: 0.3277 - val_loss: 1.1315 - val_accuracy: 0.5469

Epoch 00002: val_accuracy improved from 0.28125 to 0.54688, saving model to models\best_model_esc10_exp_2_17
Epoch 3/20
24/24 [==============================] - 39s 2s/step - loss: 1.3762 - accuracy: 0.4827 - val_loss: 0.8004 - val_accuracy: 0.7031

Epoch 00003: val_accuracy improved from 0.54688 to 0.70312, saving model to models\best_model_esc10_exp_2_17
Epoch 4/20
24/24 [==============================] - 43s 2s/step - loss: 1.0185 - accuracy: 0.6327 - val_loss: 0.6230 - val_accuracy: 0.7812

Epoch 00004: val_accuracy improved from 0.70312 to 0.78125, saving model to models\best_model_esc10_exp_2_17
Epoch 5/20
24/24 [==============================] - 43s 2s/step - loss: 0.8598 - accuracy: 0.7012 - val_loss: 0.5661 - val_accuracy: 0.8125

Epoch 00005: val_accuracy improved from 0.78125 to 0.81250, saving model to models\best_model_esc10_exp_2_17
Epoch 6/20
24/24 [==============================] - 40s 2s/step - loss: 0.6986 - accuracy: 0.7301 - val_loss: 0.4909 - val_accuracy: 0.7500

Epoch 00006: val_accuracy did not improve from 0.81250
Epoch 7/20
24/24 [==============================] - 43s 2s/step - loss: 0.6040 - accuracy: 0.7763 - val_loss: 0.6006 - val_accuracy: 0.7812

Epoch 00007: val_accuracy did not improve from 0.81250
Epoch 8/20
24/24 [==============================] - 40s 2s/step - loss: 0.5731 - accuracy: 0.7852 - val_loss: 0.5485 - val_accuracy: 0.8125

Epoch 00008: val_accuracy did not improve from 0.81250
Epoch 9/20
24/24 [==============================] - 44s 2s/step - loss: 0.4955 - accuracy: 0.8210 - val_loss: 0.4770 - val_accuracy: 0.7656

Epoch 00009: val_accuracy did not improve from 0.81250
Epoch 10/20
24/24 [==============================] - 43s 2s/step - loss: 0.4247 - accuracy: 0.8441 - val_loss: 0.4919 - val_accuracy: 0.8125

Epoch 00010: val_accuracy did not improve from 0.81250
Epoch 11/20
24/24 [==============================] - 41s 2s/step - loss: 0.4043 - accuracy: 0.8571 - val_loss: 0.5259 - val_accuracy: 0.8438

Epoch 00011: val_accuracy improved from 0.81250 to 0.84375, saving model to models\best_model_esc10_exp_2_17
Epoch 12/20
24/24 [==============================] - 43s 2s/step - loss: 0.3163 - accuracy: 0.8847 - val_loss: 0.5053 - val_accuracy: 0.8594

Epoch 00012: val_accuracy improved from 0.84375 to 0.85938, saving model to models\best_model_esc10_exp_2_17
Epoch 13/20
24/24 [==============================] - 42s 2s/step - loss: 0.2683 - accuracy: 0.8952 - val_loss: 0.4785 - val_accuracy: 0.8438

Epoch 00013: val_accuracy did not improve from 0.85938
Epoch 14/20
24/24 [==============================] - 38s 2s/step - loss: 0.2501 - accuracy: 0.9041 - val_loss: 0.4767 - val_accuracy: 0.8906

Epoch 00014: val_accuracy improved from 0.85938 to 0.89062, saving model to models\best_model_esc10_exp_2_17
Epoch 15/20
24/24 [==============================] - 42s 2s/step - loss: 0.2361 - accuracy: 0.9218 - val_loss: 0.6567 - val_accuracy: 0.8750

Epoch 00015: val_accuracy did not improve from 0.89062
Epoch 16/20
24/24 [==============================] - 44s 2s/step - loss: 0.2505 - accuracy: 0.9111 - val_loss: 0.6832 - val_accuracy: 0.8438

Epoch 00016: val_accuracy did not improve from 0.89062
Epoch 17/20
24/24 [==============================] - 42s 2s/step - loss: 0.2815 - accuracy: 0.9004 - val_loss: 0.5924 - val_accuracy: 0.8125

Epoch 00017: val_accuracy did not improve from 0.89062
Epoch 18/20
24/24 [==============================] - 38s 2s/step - loss: 0.2044 - accuracy: 0.9242 - val_loss: 0.6522 - val_accuracy: 0.8906

Epoch 00018: val_accuracy did not improve from 0.89062
Epoch 19/20
24/24 [==============================] - 43s 2s/step - loss: 0.1862 - accuracy: 0.9368 - val_loss: 0.6199 - val_accuracy: 0.8750

Epoch 00019: val_accuracy did not improve from 0.89062
Epoch 20/20
24/24 [==============================] - 43s 2s/step - loss: 0.1679 - accuracy: 0.9420 - val_loss: 0.7897 - val_accuracy: 0.8438

Epoch 00020: val_accuracy did not improve from 0.89062
Test accuracy:  0.8500000238418579
Epoch 1/20
24/24 [==============================] - 42s 2s/step - loss: 2.1653 - accuracy: 0.1862 - val_loss: 1.5746 - val_accuracy: 0.4531

Epoch 00001: val_accuracy improved from -inf to 0.45312, saving model to models\best_model_esc10_exp_2_18
Epoch 2/20
24/24 [==============================] - 39s 2s/step - loss: 1.6566 - accuracy: 0.3853 - val_loss: 0.9789 - val_accuracy: 0.5938

Epoch 00002: val_accuracy improved from 0.45312 to 0.59375, saving model to models\best_model_esc10_exp_2_18
Epoch 3/20
24/24 [==============================] - 44s 2s/step - loss: 1.2354 - accuracy: 0.5671 - val_loss: 0.7418 - val_accuracy: 0.7344

Epoch 00003: val_accuracy improved from 0.59375 to 0.73438, saving model to models\best_model_esc10_exp_2_18
Epoch 4/20
24/24 [==============================] - 43s 2s/step - loss: 1.0486 - accuracy: 0.6242 - val_loss: 0.5236 - val_accuracy: 0.7812

Epoch 00004: val_accuracy improved from 0.73438 to 0.78125, saving model to models\best_model_esc10_exp_2_18
Epoch 5/20
24/24 [==============================] - 41s 2s/step - loss: 0.8454 - accuracy: 0.6912 - val_loss: 0.5611 - val_accuracy: 0.7969

Epoch 00005: val_accuracy improved from 0.78125 to 0.79688, saving model to models\best_model_esc10_exp_2_18
Epoch 6/20
24/24 [==============================] - 39s 2s/step - loss: 0.6440 - accuracy: 0.7783 - val_loss: 0.5298 - val_accuracy: 0.8125

Epoch 00006: val_accuracy improved from 0.79688 to 0.81250, saving model to models\best_model_esc10_exp_2_18
Epoch 7/20
24/24 [==============================] - 44s 2s/step - loss: 0.5653 - accuracy: 0.7835 - val_loss: 0.5131 - val_accuracy: 0.7812

Epoch 00007: val_accuracy did not improve from 0.81250
Epoch 8/20
24/24 [==============================] - 44s 2s/step - loss: 0.5370 - accuracy: 0.8101 - val_loss: 0.5588 - val_accuracy: 0.8281

Epoch 00008: val_accuracy improved from 0.81250 to 0.82812, saving model to models\best_model_esc10_exp_2_18
Epoch 9/20
24/24 [==============================] - 40s 2s/step - loss: 0.5009 - accuracy: 0.8208 - val_loss: 0.4511 - val_accuracy: 0.8438

Epoch 00009: val_accuracy improved from 0.82812 to 0.84375, saving model to models\best_model_esc10_exp_2_18
Epoch 10/20
24/24 [==============================] - 40s 2s/step - loss: 0.4331 - accuracy: 0.8705 - val_loss: 0.5704 - val_accuracy: 0.8281

Epoch 00010: val_accuracy did not improve from 0.84375
Epoch 11/20
24/24 [==============================] - 44s 2s/step - loss: 0.3708 - accuracy: 0.8587 - val_loss: 0.5082 - val_accuracy: 0.8750

Epoch 00011: val_accuracy improved from 0.84375 to 0.87500, saving model to models\best_model_esc10_exp_2_18
Epoch 12/20
24/24 [==============================] - 43s 2s/step - loss: 0.4318 - accuracy: 0.8387 - val_loss: 0.4630 - val_accuracy: 0.8594

Epoch 00012: val_accuracy did not improve from 0.87500
Epoch 13/20
24/24 [==============================] - 39s 2s/step - loss: 0.3305 - accuracy: 0.8776 - val_loss: 0.6365 - val_accuracy: 0.8281

Epoch 00013: val_accuracy did not improve from 0.87500
Epoch 14/20
24/24 [==============================] - 41s 2s/step - loss: 0.2709 - accuracy: 0.9053 - val_loss: 0.6706 - val_accuracy: 0.8125

Epoch 00014: val_accuracy did not improve from 0.87500
Epoch 15/20
24/24 [==============================] - 44s 2s/step - loss: 0.2787 - accuracy: 0.9024 - val_loss: 0.6791 - val_accuracy: 0.8906

Epoch 00015: val_accuracy improved from 0.87500 to 0.89062, saving model to models\best_model_esc10_exp_2_18
Epoch 16/20
24/24 [==============================] - 43s 2s/step - loss: 0.3046 - accuracy: 0.8828 - val_loss: 0.6951 - val_accuracy: 0.8594

Epoch 00016: val_accuracy did not improve from 0.89062
Epoch 17/20
24/24 [==============================] - 38s 2s/step - loss: 0.2326 - accuracy: 0.9129 - val_loss: 0.7138 - val_accuracy: 0.8438

Epoch 00017: val_accuracy did not improve from 0.89062
Epoch 18/20
24/24 [==============================] - 42s 2s/step - loss: 0.1928 - accuracy: 0.9312 - val_loss: 0.5405 - val_accuracy: 0.9219

Epoch 00018: val_accuracy improved from 0.89062 to 0.92188, saving model to models\best_model_esc10_exp_2_18
Epoch 19/20
24/24 [==============================] - 44s 2s/step - loss: 0.2110 - accuracy: 0.9155 - val_loss: 0.7208 - val_accuracy: 0.8438

Epoch 00019: val_accuracy did not improve from 0.92188
Epoch 20/20
24/24 [==============================] - 42s 2s/step - loss: 0.1893 - accuracy: 0.9396 - val_loss: 0.9510 - val_accuracy: 0.8750

Epoch 00020: val_accuracy did not improve from 0.92188
Test accuracy:  0.875
Epoch 1/20
24/24 [==============================] - 39s 2s/step - loss: 2.2409 - accuracy: 0.1543 - val_loss: 1.6489 - val_accuracy: 0.3906

Epoch 00001: val_accuracy improved from -inf to 0.39062, saving model to models\best_model_esc10_exp_2_19
Epoch 2/20
24/24 [==============================] - 44s 2s/step - loss: 1.7690 - accuracy: 0.3445 - val_loss: 1.0866 - val_accuracy: 0.6094

Epoch 00002: val_accuracy improved from 0.39062 to 0.60938, saving model to models\best_model_esc10_exp_2_19
Epoch 3/20
24/24 [==============================] - 44s 2s/step - loss: 1.3315 - accuracy: 0.5116 - val_loss: 0.8229 - val_accuracy: 0.6875

Epoch 00003: val_accuracy improved from 0.60938 to 0.68750, saving model to models\best_model_esc10_exp_2_19
Epoch 4/20
24/24 [==============================] - 40s 2s/step - loss: 1.1131 - accuracy: 0.5887 - val_loss: 0.7538 - val_accuracy: 0.7344

Epoch 00004: val_accuracy improved from 0.68750 to 0.73438, saving model to models\best_model_esc10_exp_2_19
Epoch 5/20
24/24 [==============================] - 41s 2s/step - loss: 0.8806 - accuracy: 0.6817 - val_loss: 0.5474 - val_accuracy: 0.7969

Epoch 00005: val_accuracy improved from 0.73438 to 0.79688, saving model to models\best_model_esc10_exp_2_19
Epoch 6/20
24/24 [==============================] - 44s 2s/step - loss: 0.8060 - accuracy: 0.7035 - val_loss: 0.5472 - val_accuracy: 0.8125

Epoch 00006: val_accuracy improved from 0.79688 to 0.81250, saving model to models\best_model_esc10_exp_2_19
Epoch 7/20
24/24 [==============================] - 43s 2s/step - loss: 0.6796 - accuracy: 0.7513 - val_loss: 0.5677 - val_accuracy: 0.8125

Epoch 00007: val_accuracy did not improve from 0.81250
Epoch 8/20
24/24 [==============================] - 39s 2s/step - loss: 0.6205 - accuracy: 0.7801 - val_loss: 0.6749 - val_accuracy: 0.7969

Epoch 00008: val_accuracy did not improve from 0.81250
Epoch 9/20
24/24 [==============================] - 42s 2s/step - loss: 0.5837 - accuracy: 0.7847 - val_loss: 0.5386 - val_accuracy: 0.8125

Epoch 00009: val_accuracy did not improve from 0.81250
Epoch 10/20
24/24 [==============================] - 44s 2s/step - loss: 0.4861 - accuracy: 0.8293 - val_loss: 0.5966 - val_accuracy: 0.8594

Epoch 00010: val_accuracy improved from 0.81250 to 0.85938, saving model to models\best_model_esc10_exp_2_19
Epoch 11/20
24/24 [==============================] - 41s 2s/step - loss: 0.4238 - accuracy: 0.8424 - val_loss: 0.6326 - val_accuracy: 0.8438

Epoch 00011: val_accuracy did not improve from 0.85938
Epoch 12/20
24/24 [==============================] - 43s 2s/step - loss: 0.4429 - accuracy: 0.8489 - val_loss: 0.5795 - val_accuracy: 0.8125

Epoch 00012: val_accuracy did not improve from 0.85938
Epoch 13/20
24/24 [==============================] - 43s 2s/step - loss: 0.3185 - accuracy: 0.8896 - val_loss: 0.5656 - val_accuracy: 0.8906

Epoch 00013: val_accuracy improved from 0.85938 to 0.89062, saving model to models\best_model_esc10_exp_2_19
Epoch 14/20
24/24 [==============================] - 40s 2s/step - loss: 0.3041 - accuracy: 0.8942 - val_loss: 0.5547 - val_accuracy: 0.8281

Epoch 00014: val_accuracy did not improve from 0.89062
Epoch 15/20
24/24 [==============================] - 42s 2s/step - loss: 0.3143 - accuracy: 0.8814 - val_loss: 0.6312 - val_accuracy: 0.8750

Epoch 00015: val_accuracy did not improve from 0.89062
Epoch 16/20
24/24 [==============================] - 44s 2s/step - loss: 0.2463 - accuracy: 0.9118 - val_loss: 0.6552 - val_accuracy: 0.8281

Epoch 00016: val_accuracy did not improve from 0.89062
Epoch 17/20
24/24 [==============================] - 41s 2s/step - loss: 0.2784 - accuracy: 0.9064 - val_loss: 0.6095 - val_accuracy: 0.8125

Epoch 00017: val_accuracy did not improve from 0.89062
Epoch 18/20
24/24 [==============================] - 39s 2s/step - loss: 0.2319 - accuracy: 0.9075 - val_loss: 0.6705 - val_accuracy: 0.8594

Epoch 00018: val_accuracy did not improve from 0.89062
Epoch 19/20
24/24 [==============================] - 44s 2s/step - loss: 0.2551 - accuracy: 0.9055 - val_loss: 0.5466 - val_accuracy: 0.8594

Epoch 00019: val_accuracy did not improve from 0.89062
Epoch 20/20
24/24 [==============================] - 44s 2s/step - loss: 0.2061 - accuracy: 0.9233 - val_loss: 0.8651 - val_accuracy: 0.8281

Epoch 00020: val_accuracy did not improve from 0.89062
Test accuracy:  0.824999988079071
CPU times: total: 1d 13h 24min 8s
Wall time: 7h 23min 26s
Out[9]:
experiment_id repetition_id sr n_fft hop_length n_mels n_augmentation_per_train p_per_augmentation n_filters_l1 n_filters_l2 n_filters_l3 n_dense_layer batch_size epochs history_accuracy history_val_accuracy history_loss history_val_loss test_accuracy
0 0 0 44100 2048 512 128 0 0.0 64 32 32 150 64 20 [0.125, 0.203125, 0.2265625, 0.3515625, 0.3789... [0.1875, 0.203125, 0.34375, 0.390625, 0.53125,... [2.3199825286865234, 2.1081252098083496, 1.936... [2.1278252601623535, 1.9560778141021729, 1.773... 0.8000
1 0 1 44100 2048 512 128 0 0.0 64 32 32 150 64 20 [0.14453125, 0.27734375, 0.3125, 0.41796875, 0... [0.28125, 0.359375, 0.34375, 0.4375, 0.515625,... [2.2147462368011475, 1.989138126373291, 1.8013... [1.9855751991271973, 1.707120656967163, 1.6498... 0.8125
2 0 2 44100 2048 512 128 0 0.0 64 32 32 150 64 20 [0.15234375, 0.20703125, 0.24609375, 0.3632812... [0.21875, 0.265625, 0.375, 0.421875, 0.59375, ... [2.267695426940918, 2.135138988494873, 1.96798... [2.0432863235473633, 1.8673837184906006, 1.738... 0.8250
3 0 3 44100 2048 512 128 0 0.0 64 32 32 150 64 20 [0.12890625, 0.1875, 0.3046875, 0.375, 0.40234... [0.21875, 0.359375, 0.375, 0.453125, 0.578125,... [2.290428876876831, 2.0849320888519287, 1.8525... [2.068817615509033, 1.8218265771865845, 1.6570... 0.8125
4 0 4 44100 2048 512 128 0 0.0 64 32 32 150 64 20 [0.140625, 0.2265625, 0.31640625, 0.375, 0.441... [0.140625, 0.328125, 0.40625, 0.453125, 0.4687... [2.2845640182495117, 2.027144432067871, 1.8002... [2.0867950916290283, 1.7636537551879883, 1.625... 0.7250
In [20]:
# Load experiment results of run 1 and run 2
experiment_results_run_1_df = pd.read_pickle('run_20220914/experiment_results_df.pkl')
experiment_results_run_2_df = pd.read_pickle('run_20220915/experiment_results_df.pkl')
experiment_results_run_3_df = pd.read_pickle('run_20220916/experiment_results_df.pkl')

experiment_results_run_1_df['run'] = 'run_1'
experiment_results_run_2_df['run'] = 'run_2'
experiment_results_run_3_df['run'] = 'run_3'
In [21]:
# Run 1 summary
experiment_results_run_1_df.groupby('experiment_id').describe()['test_accuracy']
Out[21]:
count mean std min 25% 50% 75% max
experiment_id
0 20.0 0.781875 0.037931 0.700 0.759375 0.7875 0.812500 0.8375
1 20.0 0.811875 0.048915 0.725 0.784375 0.8125 0.850000 0.9000
2 20.0 0.846250 0.036296 0.775 0.821875 0.8500 0.865625 0.9000
In [22]:
# Run 2 summary
experiment_results_run_2_df.groupby('experiment_id').describe()['test_accuracy']
Out[22]:
count mean std min 25% 50% 75% max
experiment_id
0 20.0 0.774375 0.042044 0.6875 0.746875 0.77500 0.8000 0.8375
1 20.0 0.816250 0.038281 0.7250 0.796875 0.81875 0.8375 0.8875
2 20.0 0.852500 0.039653 0.7875 0.825000 0.85625 0.8750 0.9125
In [23]:
# Run 3 summary
experiment_results_run_3_df.groupby('experiment_id').describe()['test_accuracy']
Out[23]:
count mean std min 25% 50% 75% max
experiment_id
0 20.0 0.778125 0.041334 0.6875 0.75625 0.79375 0.803125 0.8250
1 20.0 0.830000 0.037478 0.7625 0.80000 0.83125 0.862500 0.8875
2 20.0 0.849375 0.038576 0.7625 0.84375 0.85625 0.875000 0.8875
In [30]:
# Combined stats for run 1, run 2 and run 3
experiment_results_df = pd.concat([experiment_results_run_1_df, experiment_results_run_2_df, experiment_results_run_3_df]).reset_index()
experiment_results_df.groupby('experiment_id').describe()['test_accuracy']
Out[30]:
count mean std min 25% 50% 75% max
experiment_id
0 60.0 0.778125 0.039904 0.6875 0.750000 0.7875 0.803125 0.8375
1 60.0 0.819375 0.041898 0.7250 0.796875 0.8250 0.850000 0.9000
2 60.0 0.849375 0.037636 0.7625 0.825000 0.8500 0.875000 0.9125
In [31]:
# Top 5 results are all from augmented trainings, with the best accuracy of ~91% achieved with 5-fold augmentation
experiment_results_df.sort_values(by='test_accuracy', ascending=False).head()
Out[31]:
index experiment_id repetition_id sr n_fft hop_length n_mels n_augmentation_per_train p_per_augmentation n_filters_l1 ... n_filters_l3 n_dense_layer batch_size epochs history_accuracy history_val_accuracy history_loss history_val_loss test_accuracy run
116 56 2 16 44100 2048 512 128 5 0.5 64 ... 32 150 64 20 [0.208984375, 0.4173177182674408, 0.548828125,... [0.421875, 0.6875, 0.734375, 0.796875, 0.8125,... [2.0674281120300293, 1.5617119073867798, 1.211... [1.6140172481536865, 1.0168728828430176, 0.739... 0.9125 run_2
107 47 2 7 44100 2048 512 128 5 0.5 64 ... 32 150 64 20 [0.19921875, 0.3743489682674408, 0.501953125, ... [0.390625, 0.5625, 0.71875, 0.734375, 0.75, 0.... [2.111309289932251, 1.6299391984939575, 1.3285... [1.6934683322906494, 1.258545160293579, 0.8326... 0.9125 run_2
106 46 2 6 44100 2048 512 128 5 0.5 64 ... 32 150 64 20 [0.1712239533662796, 0.3567708432674408, 0.508... [0.28125, 0.609375, 0.703125, 0.765625, 0.75, ... [2.1432950496673584, 1.6811304092407227, 1.308... [1.7030054330825806, 1.169342041015625, 0.9625... 0.9000 run_2
111 51 2 11 44100 2048 512 128 5 0.5 64 ... 32 150 64 20 [0.2161458283662796, 0.4153645932674408, 0.549... [0.484375, 0.609375, 0.8125, 0.734375, 0.84375... [2.040602445602417, 1.561587929725647, 1.20079... [1.5753347873687744, 1.0077776908874512, 0.704... 0.9000 run_2
45 45 2 5 44100 2048 512 128 5 0.5 64 ... 32 150 64 20 [0.1875, 0.3776041567325592, 0.51953125, 0.620... [0.375, 0.671875, 0.734375, 0.8125, 0.828125, ... [2.1302452087402344, 1.6840261220932007, 1.295... [1.6193472146987915, 1.0902442932128906, 0.756... 0.9000 run_1

5 rows × 21 columns

In [35]:
# Show accuracy distribution per experiment
sns.displot(data=experiment_results_df, x='test_accuracy', hue='experiment_id', multiple='stack')
plt.title('Distribution per Experiment')
plt.show()
In [37]:
# Compare only experiment 0 (no augmentation) and experiment 2 (5-fold augmentation)
sns.displot(data=experiment_results_df[experiment_results_df['experiment_id']!=1], x='test_accuracy', hue='experiment_id', multiple='stack')
plt.title('Distribution per Experiment')
plt.show()
In [40]:
# Show training history of best model
best_model = experiment_results_df[(experiment_results_df['experiment_id']==2) & (experiment_results_df['repetition_id']==16) & (experiment_results_df['run']=='run_2')]
history_accuracy = best_model['history_accuracy'].values[0]
history_val_accuracy = best_model['history_val_accuracy'].values[0]
history_loss = best_model['history_loss'].values[0]
history_val_loss = best_model['history_val_loss'].values[0]

# Plot training history
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))

axs[0].plot(history_accuracy)
axs[0].plot(history_val_accuracy)
axs[0].set_title('model accuracy')
axs[0].set_ylabel('accuracy')
axs[0].set_xlabel('epoch')
axs[0].set_ylim(0,1)
axs[0].legend(['train', 'val'], loc='lower right')

axs[1].plot(history_loss)
axs[1].plot(history_val_loss)
axs[1].set_title('model loss')
axs[1].set_ylabel('loss')
axs[1].set_xlabel('epoch')
axs[1].legend(['train', 'val'], loc='upper right')

fig.show()
In [41]:
# Load test data of best experiment
X_test_data = np.load('run_20220915/X_test_std_2.npy')
y_test_data = np.load('run_20220915/y_test_org_2.npy')

# Evaluate on best model weights
n_rows = X_test_data.shape[1]
n_cols = X_test_data.shape[2]

# Build and train CNN
model = Sequential([
    Conv2D(filters=64, kernel_size=10, strides=2, padding='same', activation='relu', input_shape=(n_rows, n_cols, 1)),
    MaxPool2D(pool_size=2, strides=2, padding='same'),
    Conv2D(filters=32, kernel_size=10, strides=2, padding='same', activation='relu'),
    MaxPool2D(pool_size=2, strides=2, padding='same'),
    Conv2D(filters=32, kernel_size=5, strides=2, padding='same', activation='relu'),
    MaxPool2D(pool_size=2, strides=2, padding='same'),
    Flatten(),
    Dropout(0.5),
    Dense(units=150, activation='relu'),
    Dense(units=10, activation='softmax')
])

# Compile the model with adam optimizer and default settings
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Load best weights
weights_path = f'run_20220915/models/best_model_esc10_exp_2_16' # Best model
model.load_weights(weights_path)

# Evaluate
test_loss, test_accuracy = model.evaluate(X_test_data, y_test_data)

print(f'Test loss: {test_loss}')
print(f'Test accuracy: {test_accuracy}')

y_pred_proba = model.predict(X_test_data)
y_pred_test = np.array([np.argmax(y) for y in y_pred_proba])
y_true_test = np.array([np.argmax(y) for y in y_test_data])

# Classification Report
print(classification_report(y_true_test, y_pred_test, target_names=label_names))

# Confusion matrix
fig, ax = plt.subplots(figsize=(10,10))
cmp = ConfusionMatrixDisplay.from_predictions(y_true_test, y_pred_test, display_labels=label_names, xticks_rotation='vertical', ax=ax)
plt.title('Confusion Matrix Best Model')
plt.show()
WARNING:tensorflow:5 out of the last 13 calls to <function Model.make_test_function.<locals>.test_function at 0x000001E187E50280> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
3/3 [==============================] - 0s 87ms/step - loss: 0.3068 - accuracy: 0.9094
Test loss: 0.3201620578765869
Test accuracy: 0.9125000238418579
                precision    recall  f1-score   support

           dog       0.89      1.00      0.94         8
       rooster       1.00      1.00      1.00         8
          rain       1.00      0.62      0.77         8
     sea_waves       0.67      0.75      0.71         8
crackling_fire       1.00      0.88      0.93         8
   crying_baby       1.00      1.00      1.00         8
      sneezing       1.00      1.00      1.00         8
    clock_tick       0.89      1.00      0.94         8
    helicopter       0.78      0.88      0.82         8
      chainsaw       1.00      1.00      1.00         8

      accuracy                           0.91        80
     macro avg       0.92      0.91      0.91        80
  weighted avg       0.92      0.91      0.91        80

2.5. Transfer learning with YAMNet (without data augmentation)¶

Besides training a model from scratch, there is also the possibility to use transfer learning and benefit from a model pre-trained on a much larger dataset.

For audio classification, one such pre-trained model is YAMNet (Yet Another Mobile Net) from Google, see:

  • https://www.tensorflow.org/tutorials/audio/transfer_learning_audio
  • https://keras.io/examples/audio/uk_ireland_accent_recognition/
  • https://tfhub.dev/google/yamnet/1

YAMNet is pre-trained on the AudioSet corpus (521 different audio classes) and can be used to compute audio embeddings for wav files, which can then serve as input features for a neural network classifier. The model is available on TensorFlow Hub.

For more details on YAMNet, see the links above.

In [88]:
# Load YAMNet from TF Hub
yamnet_model_handle = 'https://tfhub.dev/google/yamnet/1'
yamnet_model = hub.load(yamnet_model_handle)
In [89]:
source_path = Path('../ESC-50-master/audio')
metadata_path = os.path.join('../ESC-50-master/meta/esc50.csv')
metadata_df = pd.read_csv(metadata_path)
metadata_esc10_df = metadata_df[metadata_df['esc10']]
esc10_files = metadata_esc10_df['filename'].values
In [90]:
# Remap original labels for ESC10 data
num_classes = 10

label_map = {
    0: 0,  # dog
    1: 1,  # rooster
    10: 2, # rain
    11: 3, # sea_waves
    12: 4, # crackling_fire
    20: 5, # crying_baby
    21: 6, # sneezing
    38: 7, # clock_tick
    40: 8, # helicopter
    41: 9  # chainsaw
}

label_names = [
    'dog',
    'rooster',
    'rain',
    'sea_waves',
    'crackling_fire',
    'crying_baby',
    'sneezing',
    'clock_tick',
    'helicopter',
    'chainsaw'
]
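As a side note, the hardcoded mapping above could also be derived from the metadata itself, since the ESC-50 metadata CSV carries `target` and `category` columns. A minimal sketch, using a small stand-in DataFrame in place of `metadata_esc10_df`:

```python
import pandas as pd

# Stand-in for metadata_esc10_df; the real ESC-50 metadata has the same columns
meta = pd.DataFrame({
    'target':   [0, 1, 10, 11, 12, 20, 21, 38, 40, 41],
    'category': ['dog', 'rooster', 'rain', 'sea_waves', 'crackling_fire',
                 'crying_baby', 'sneezing', 'clock_tick', 'helicopter', 'chainsaw'],
})

# Remap the sparse original targets to dense labels 0..9, ordered by original target
targets = sorted(meta['target'].unique())
label_map = {orig: new for new, orig in enumerate(targets)}
label_names = [meta.loc[meta['target'] == t, 'category'].iloc[0] for t in targets]
```

This keeps the remapping consistent with the metadata even if the class selection changes.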
In [91]:
# Original example at a 16 kHz sample rate
example_path = os.path.join(source_path, '3-164688-A-38.wav') # clock tick
signal, sr = librosa.load(example_path, sr=16000)

Audio(signal, rate=sr)
Out[91]:
In [92]:
# YAMNet returns scores for its 521 audio classes (which we don't use), embeddings of length 1024 and the log mel spectrogram
scores, embeddings, spectrogram = yamnet_model(signal)
In [93]:
# YAMNet returns multiple frames per audio file because it uses fixed-length sliding windows to create the embeddings
# See details (chapter Inputs): https://tfhub.dev/google/yamnet/1
# For our 5-second audio signal we get 10 embedding vectors of length 1024
embeddings_arr = embeddings.numpy()
embeddings_arr.shape
Out[93]:
(10, 1024)
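An alternative design worth noting: instead of treating each of the ten windows as a separate training example (as done below), the per-window embeddings could be mean-pooled into a single clip-level feature vector. A sketch with a random stand-in array of YAMNet's output shape:

```python
import numpy as np

# Stand-in for embeddings.numpy(): 10 windows x 1024-dim YAMNet embeddings
embeddings_arr = np.random.default_rng(0).random((10, 1024)).astype('float32')

# Mean-pool across the time windows to get one fixed-size clip embedding
clip_embedding = embeddings_arr.mean(axis=0)
```

This trades away the 10x increase in training examples for a guarantee of one prediction per clip.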
In [94]:
embeddings_arr[5][400:450]
Out[94]:
array([0.        , 0.        , 0.        , 0.04426242, 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.02839468,
       0.        , 0.49809802, 0.15779917, 0.        , 0.        ,
       0.        , 0.02080138, 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.01159444, 0.        , 0.36942562,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.49283254, 0.        , 0.        , 0.        , 0.03925116,
       0.        , 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.28671196, 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.        ],
      dtype=float32)
In [95]:
# Load the ESC-10 data at 16 kHz and extract embeddings
sr = 16000
preprocessed_data = []

for file in tqdm(esc10_files):

    # Load file
    file_path = os.path.join(source_path, file)
    signal, sr = librosa.load(file_path, sr=sr)

    # Get embeddings from YAMNet (we don't need the scores and the spectrogram)
    _, embeddings, _ = yamnet_model(signal)
    
    # Work with numpy array
    embeddings_arr = embeddings.numpy()

    # Extract class label from filename
    label_org = int(file.split('-')[-1].split('.')[0])

    # Get new label from label map
    label = label_map[label_org]
    
    # Loop through all frames returned by YAMNet and attach the same label
    for embeddings in embeddings_arr:
    
        preprocessed_data.append({
            'file': file,
            'label': label,
            'signal': signal,
            'embeddings': embeddings
        })

preprocessed_data_df = pd.DataFrame(preprocessed_data)
preprocessed_data_df.head()
100%|██████████| 400/400 [01:01<00:00,  6.46it/s]
Out[95]:
file label signal embeddings
0 1-100032-A-0.wav 0 [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [2.0761294, 0.39272168, 0.5436222, 0.07558242,...
1 1-100032-A-0.wav 0 [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [2.0761294, 0.39272168, 0.5436222, 0.07558242,...
2 1-100032-A-0.wav 0 [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [2.0761294, 0.39272168, 0.5436222, 0.07558242,...
3 1-100032-A-0.wav 0 [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [0.0, 0.0, 3.1225166, 0.0, 0.0, 0.0, 0.0, 0.0,...
4 1-100032-A-0.wav 0 [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... [0.0, 0.0, 1.8431135, 0.0, 0.0, 0.0, 0.0, 0.0,...
In [98]:
# Save data
preprocessed_data_df.to_pickle('preprocessed_embeddings_esc10.pkl')
In [96]:
# Load data
preprocessed_data_df = pd.read_pickle('preprocessed_embeddings_esc10.pkl')
In [97]:
# We get 4000 embeddings out of the initial 400 records
preprocessed_data_df.shape
Out[97]:
(4000, 4)
In [98]:
X = np.array(list(preprocessed_data_df['embeddings'].values))
X.shape
Out[98]:
(4000, 1024)
In [99]:
# Target variable
y = preprocessed_data_df['label'].values
y = to_categorical(y, num_classes=num_classes)

# Filenames
files = preprocessed_data_df['file'].values
In [100]:
# Train/test split with stratify on y (we want all classes evenly represented in train and test)
X_train, X_test, y_train, y_test, files_train, files_test = train_test_split(X, y, files, test_size=0.2, stratify=y, random_state=42)

# Split train set from above in train and valid set, so we have train, valid and test set
X_train, X_valid, y_train, y_valid, files_train, files_valid = train_test_split(X_train, y_train, files_train, test_size=0.2, stratify=y_train, random_state=42)

print('X_train: ', X_train.shape)
print('y_train: ', y_train.shape)
print('files_train: ', files_train.shape)
print('X_valid: ', X_valid.shape)
print('y_valid: ', y_valid.shape)
print('files_valid: ', files_valid.shape)
print('X_test: ', X_test.shape)
print('y_test: ', y_test.shape)
print('files_test: ', files_test.shape)
X_train:  (2560, 1024)
y_train:  (2560, 10)
files_train:  (2560,)
X_valid:  (640, 1024)
y_valid:  (640, 10)
files_valid:  (640,)
X_test:  (800, 1024)
y_test:  (800, 10)
files_test:  (800,)
In [101]:
# Check that files were shuffled during the train-test split
files_train
Out[101]:
array(['1-172649-C-40.wav', '3-157615-A-10.wav', '1-172649-F-40.wav', ...,
       '2-50666-A-20.wav', '5-198411-A-20.wav', '5-194533-A-21.wav'],
      dtype=object)
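One caveat with the split above: it stratifies over individual embedding windows, so windows of the same recording can land in both train and test, which likely inflates the test accuracy. A group-aware split on the filename would avoid this leakage; a sketch using scikit-learn's `GroupShuffleSplit` with synthetic stand-in data:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Synthetic stand-ins: 40 windows from 10 files (4 windows per file)
X = np.random.default_rng(0).random((40, 1024)).astype('float32')
groups = np.repeat([f'file_{i}.wav' for i in range(10)], 4)

# Split so that all windows of a file end up on the same side
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(gss.split(X, groups=groups))
```

Applied to the real data, `files` would serve as the `groups` argument.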
In [110]:
# Simple Sequential Dense model
model = Sequential([
    Dense(512, activation='relu', input_shape=(1024,)),
    Dense(num_classes, activation='softmax')
])

model.summary()

# Model checkpoint to save best model
checkpoint_path = f'models/best_model_esc10_pretrained'
checkpoint = ModelCheckpoint(checkpoint_path, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')

# Compile the model with adam optimizer and default settings
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Fit
history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid), batch_size=128, callbacks=[checkpoint], verbose=1)
Model: "sequential_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_12 (Dense)             (None, 512)               524800    
_________________________________________________________________
dense_13 (Dense)             (None, 10)                5130      
=================================================================
Total params: 529,930
Trainable params: 529,930
Non-trainable params: 0
_________________________________________________________________
Epoch 1/20
20/20 [==============================] - 0s 12ms/step - loss: 1.5948 - accuracy: 0.5970 - val_loss: 0.7058 - val_accuracy: 0.8359

Epoch 00001: val_accuracy improved from -inf to 0.83594, saving model to models\best_model_esc10_pretrained
INFO:tensorflow:Assets written to: models\best_model_esc10_pretrained\assets
Epoch 2/20
20/20 [==============================] - 0s 7ms/step - loss: 0.5295 - accuracy: 0.8503 - val_loss: 0.4126 - val_accuracy: 0.8687

Epoch 00002: val_accuracy improved from 0.83594 to 0.86875, saving model to models\best_model_esc10_pretrained
INFO:tensorflow:Assets written to: models\best_model_esc10_pretrained\assets
Epoch 3/20
20/20 [==============================] - 0s 7ms/step - loss: 0.3924 - accuracy: 0.8767 - val_loss: 0.4216 - val_accuracy: 0.8766

Epoch 00003: val_accuracy improved from 0.86875 to 0.87656, saving model to models\best_model_esc10_pretrained
INFO:tensorflow:Assets written to: models\best_model_esc10_pretrained\assets
Epoch 4/20
20/20 [==============================] - 0s 7ms/step - loss: 0.3585 - accuracy: 0.8903 - val_loss: 0.3969 - val_accuracy: 0.8438

Epoch 00004: val_accuracy did not improve from 0.87656
Epoch 5/20
20/20 [==============================] - 0s 8ms/step - loss: 0.3475 - accuracy: 0.8888 - val_loss: 0.3631 - val_accuracy: 0.8859

Epoch 00005: val_accuracy improved from 0.87656 to 0.88594, saving model to models\best_model_esc10_pretrained
INFO:tensorflow:Assets written to: models\best_model_esc10_pretrained\assets
Epoch 6/20
20/20 [==============================] - 0s 7ms/step - loss: 0.3430 - accuracy: 0.8869 - val_loss: 0.4016 - val_accuracy: 0.8516

Epoch 00006: val_accuracy did not improve from 0.88594
Epoch 7/20
20/20 [==============================] - 0s 8ms/step - loss: 0.3126 - accuracy: 0.8958 - val_loss: 0.3595 - val_accuracy: 0.8859

Epoch 00007: val_accuracy did not improve from 0.88594
Epoch 8/20
20/20 [==============================] - 0s 7ms/step - loss: 0.2768 - accuracy: 0.9223 - val_loss: 0.5470 - val_accuracy: 0.8531

Epoch 00008: val_accuracy did not improve from 0.88594
Epoch 9/20
20/20 [==============================] - 0s 7ms/step - loss: 0.3810 - accuracy: 0.9033 - val_loss: 0.3790 - val_accuracy: 0.8641

Epoch 00009: val_accuracy did not improve from 0.88594
Epoch 10/20
20/20 [==============================] - 0s 8ms/step - loss: 0.3054 - accuracy: 0.9082 - val_loss: 0.3423 - val_accuracy: 0.8578

Epoch 00010: val_accuracy did not improve from 0.88594
Epoch 11/20
20/20 [==============================] - 0s 7ms/step - loss: 0.2958 - accuracy: 0.9108 - val_loss: 0.3475 - val_accuracy: 0.8531

Epoch 00011: val_accuracy did not improve from 0.88594
Epoch 12/20
20/20 [==============================] - 0s 7ms/step - loss: 0.2262 - accuracy: 0.9205 - val_loss: 0.3108 - val_accuracy: 0.8906

Epoch 00012: val_accuracy improved from 0.88594 to 0.89062, saving model to models\best_model_esc10_pretrained
INFO:tensorflow:Assets written to: models\best_model_esc10_pretrained\assets
Epoch 13/20
20/20 [==============================] - 0s 8ms/step - loss: 0.2082 - accuracy: 0.9305 - val_loss: 0.3348 - val_accuracy: 0.8609

Epoch 00013: val_accuracy did not improve from 0.89062
Epoch 14/20
20/20 [==============================] - 0s 8ms/step - loss: 0.2215 - accuracy: 0.9208 - val_loss: 0.3215 - val_accuracy: 0.8953

Epoch 00014: val_accuracy improved from 0.89062 to 0.89531, saving model to models\best_model_esc10_pretrained
INFO:tensorflow:Assets written to: models\best_model_esc10_pretrained\assets
Epoch 15/20
20/20 [==============================] - 0s 8ms/step - loss: 0.1984 - accuracy: 0.9279 - val_loss: 0.3087 - val_accuracy: 0.8734

Epoch 00015: val_accuracy did not improve from 0.89531
Epoch 16/20
20/20 [==============================] - 0s 8ms/step - loss: 0.1874 - accuracy: 0.9365 - val_loss: 0.3163 - val_accuracy: 0.8922

Epoch 00016: val_accuracy did not improve from 0.89531
Epoch 17/20
20/20 [==============================] - 0s 7ms/step - loss: 0.1737 - accuracy: 0.9362 - val_loss: 0.3508 - val_accuracy: 0.8594

Epoch 00017: val_accuracy did not improve from 0.89531
Epoch 18/20
20/20 [==============================] - 0s 9ms/step - loss: 0.2372 - accuracy: 0.9237 - val_loss: 0.3202 - val_accuracy: 0.8922

Epoch 00018: val_accuracy did not improve from 0.89531
Epoch 19/20
20/20 [==============================] - 0s 8ms/step - loss: 0.1964 - accuracy: 0.9327 - val_loss: 0.3588 - val_accuracy: 0.8906

Epoch 00019: val_accuracy did not improve from 0.89531
Epoch 20/20
20/20 [==============================] - 0s 7ms/step - loss: 0.2620 - accuracy: 0.9180 - val_loss: 0.3202 - val_accuracy: 0.8922

Epoch 00020: val_accuracy did not improve from 0.89531
In [111]:
# Load best model
model = tf.keras.models.load_model(checkpoint_path)

# Plot training history
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))

# Plot training accuracy history
axs[0].plot(history.history['accuracy'])
axs[0].plot(history.history['val_accuracy'])
axs[0].set_title('model accuracy')
axs[0].set_ylabel('accuracy')
axs[0].set_xlabel('epoch')
axs[0].set_ylim(0,1)
axs[0].legend(['train', 'val'], loc='lower right')

axs[1].plot(history.history['loss'])
axs[1].plot(history.history['val_loss'])
axs[1].set_title('model loss')
axs[1].set_ylabel('loss')
axs[1].set_xlabel('epoch')
axs[1].legend(['train', 'val'], loc='upper right')

plt.show()

test_loss, test_accuracy = model.evaluate(X_test, y_test)

print(f'Test loss: {test_loss}')
print(f'Test accuracy: {test_accuracy}')

y_pred_proba = model.predict(X_test)
y_pred_test = np.array([np.argmax(y) for y in y_pred_proba])
y_true_test = np.array([np.argmax(y) for y in y_test])

# Classification Report
print(classification_report(y_true_test, y_pred_test, target_names=label_names))

# Confusion matrix
fig, ax = plt.subplots(figsize=(10,10))
cmp = ConfusionMatrixDisplay.from_predictions(y_true_test, y_pred_test, display_labels=label_names, xticks_rotation='vertical', ax=ax)
plt.show()
25/25 [==============================] - 0s 1ms/step - loss: 0.3034 - accuracy: 0.8950
Test loss: 0.30342915654182434
Test accuracy: 0.8949999809265137
                precision    recall  f1-score   support

           dog       0.98      0.76      0.86        80
       rooster       0.88      0.55      0.68        80
          rain       0.95      0.97      0.96        80
     sea_waves       0.97      0.95      0.96        80
crackling_fire       0.95      0.96      0.96        80
   crying_baby       0.96      0.94      0.95        80
      sneezing       0.58      0.95      0.72        80
    clock_tick       0.93      0.96      0.94        80
    helicopter       0.97      0.96      0.97        80
      chainsaw       0.99      0.94      0.96        80

      accuracy                           0.90       800
     macro avg       0.92      0.89      0.90       800
  weighted avg       0.92      0.90      0.90       800

Summary and further work¶

Summary

  • Spectrograms (incl. mel spectrograms) as well as MFCCs used as features for a CNN can achieve very good performance on simple datasets (e.g. AudioMNIST)
  • Data augmentation clearly improves performance on more difficult datasets with fewer data points (e.g. ESC-10)
  • There are many parameter options along the whole pipeline (e.g. spectrogram configurations for feature extraction, augmentation techniques and ranges, neural network architecture, training/optimizer)
  • Using a pre-trained model (YAMNet) achieves comparable performance on the ESC-10 dataset, without the help of data augmentation

Further work

  • Explore other commonly used features in audio classification (Chroma features, Zero Crossing Rate, etc.)
  • Use data augmentation on spectrograms (frequency masking, etc.) instead of only on the waveform
  • Online augmentation as part of neural network training
  • Explore further pre-trained models and build models based on pre-trained spectrograms
  • Develop pipelines suitable for larger datasets
  • Look into techniques to classify audio containing multiple classes (i.e. a multi-label classification problem); see https://mct-master.github.io/machine-learning/2020/09/20/classifying-urban-sounds.html for inspiration on how to create such a dataset by overlaying audio streams
  • Look into algorithms such as YOHO (You Only Hear Once), see https://arxiv.org/abs/2109.00962
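To illustrate the frequency-masking idea from the list above, here is a minimal SpecAugment-style sketch on a mel spectrogram; the function name and parameters are illustrative, not from an existing library:

```python
import numpy as np

def frequency_mask(spec, max_width=8, rng=None):
    """Zero out a random horizontal band of mel bins (SpecAugment-style)."""
    if rng is None:
        rng = np.random.default_rng()
    spec = spec.copy()
    n_mels = spec.shape[0]
    width = int(rng.integers(1, max_width + 1))       # band height in mel bins
    start = int(rng.integers(0, n_mels - width + 1))  # band start position
    spec[start:start + width, :] = 0.0
    return spec

# Example on a dummy (n_mels x n_frames) spectrogram
masked = frequency_mask(np.ones((128, 100), dtype='float32'),
                        rng=np.random.default_rng(0))
```

Unlike the waveform augmentations used earlier, this operates directly on the extracted features and could be applied on the fly during training.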

References¶

  • https://towardsdatascience.com/preprocess-audio-data-with-the-signal-envelope-499e6072108
  • https://github.com/soerenab/AudioMNIST/blob/master/preprocess_data.py
  • https://www.researchgate.net/publication/347356900_Audio_Pre-Processing_For_Deep_Learning
  • https://towardsdatascience.com/understanding-audio-data-fourier-transform-fft-spectrogram-and-speech-recognition-a4072d228520
  • https://dropsofai.com/sound-wave-basics-every-data-scientist-must-know-before-starting-analysis-on-audio-data/
  • https://www.section.io/engineering-education/machine-learning-for-audio-classification/#differences-between-sound-and-audio
  • https://www.kdnuggets.com/2020/02/audio-data-analysis-deep-learning-python-part-1.html
  • https://medium.com/gradientcrescent/urban-sound-classification-using-convolutional-neural-networks-with-keras-theory-and-486e92785df4
  • https://towardsdatascience.com/audio-classification-using-fastai-and-on-the-fly-frequency-transforms-4dbe1b540f89
  • https://medium.com/analytics-vidhya/understanding-the-mel-spectrogram-fca2afa2ce53
  • https://blog.paperspace.com/introduction-to-audio-analysis-and-synthesis/
  • https://www.youtube.com/watch?v=m3XbqfIij_Y (Basics Audio Processing)
  • https://www.youtube.com/watch?v=Oa_d-zaUti8 (Preprocessing Audio for Deep Learning)
  • https://www.youtube.com/watch?v=szyGiObZymo (Music Genre Classification)
  • https://www.tutorialexample.com/understand-n_fft-hop_length-win_length-in-audio-processing-librosa-tutorial/ (hop_length, n_fft, win_length)
  • https://towardsdatascience.com/audio-deep-learning-made-simple-sound-classification-step-by-step-cebc936bbe5
  • https://colab.research.google.com/github/enzokro/clck10/blob/master/_notebooks/2020-09-10-Normalizing-spectrograms-for-deep-learning.ipynb (Normalize spectrograms)
  • https://towardsdatascience.com/audio-deep-learning-made-simple-part-3-data-preparation-and-augmentation-24c6e1f6b52
  • https://medium.com/analytics-vidhya/simplifying-audio-data-fft-stft-mfcc-for-machine-learning-and-deep-learning-443a2f962e0e (STFT, MFCC, FFT)
  • https://www.nti-audio.com/en/support/know-how/fast-fourier-transform-fft#:~:text=The%20sampling%20rate%20or%20sampling,2%5E10%20%3D%201024%20samples) (FFT parameters)
  • https://towardsdatascience.com/all-you-need-to-know-to-start-speech-processing-with-deep-learning-102c916edf62 (Basics)
  • https://www.researchgate.net/figure/Performance-chart-of-the-model-in-the-ESC-10-dataset_tbl3_344519283 (Results with ESC-10)
  • https://www.tensorflow.org/tutorials/audio/transfer_learning_audio (Transfer Learning based on a Google pre-trained model YAMNet)
In [ ]: